Posts: 18729
Joined: -
Last visited: -
Days Won: 708

Everything posted by Nytro
-
Yes, I noticed that the CryptoAG story is old and is probably being recirculated without much fact-checking. Something like the "black ambulance that kidnaps children" stories, if you are familiar with the Romanian public space. As for "Man in The Middle", I don't think that was the issue in this case. I haven't read any technical details about what happened, but I'm thinking of a "master key" that gave them the ability to later decrypt data obtained through any means, even on paper for example. Even if there was no "master key", there may have been a weakness that allowed a fast brute-force of the encrypted data. Just speculating, I have no idea.
-
One of the apps preinstalled on Samsung phones has been found sending user data to a Chinese company
Aurelian Mihai - 7 Feb 2020
The trouble for Samsung started after an investigation showed that the feature that cleans unwanted files from storage is implemented using software supplied by a Chinese company with a dubious reputation, Qihoo 360, known for its abusive practices of collecting user data in order to sell it to advertising companies. Predictably, the news was not well received by the Android community, and Samsung's clarification, namely that the entire process of scanning and removing junk files is handled on the device and that only generic information is uploaded to Qihoo's servers, did not satisfy everyone. To remove any remaining doubts, Samsung went a step further and created an Android 10 update that removes the software supplied by Qihoo 360 from users' devices, even if that means the feature disappears from the Device Care app. Samsung is expected to reintroduce the junk-file cleaning feature in a future update of the Device Care app, using software developed by the company's own engineers or commissioned from another vendor with a somewhat better reputation.
Source: https://www.go4it.ro/aplicatii/una-dintre-aplicatiile-preinstalate-pe-telefoanele-samsung-descoperita-trimitand-datele-utilizatorilor-unei-companii-chinezesti-18808744/
-
Hi, there are plenty of applications that already do this, I don't think a new one is needed. https://hackertarget.com/brute-forcing-passwords-with-ncrack-hydra-and-medusa/
-
Exfiltrating Data from Air-Gapped Computers Using Screen Brightness February 05, 2020Mohit Kumar It may sound creepy and unreal, but hackers can also exfiltrate sensitive data from your computer by simply changing the brightness of the screen, new cybersecurity research shared with The Hacker News revealed. In recent years, several cybersecurity researchers demonstrated innovative ways to covertly exfiltrate data from a physically isolated air-gapped computer that can't connect wirelessly or physically with other computers or network devices. These clever ideas rely on exploiting little-noticed emissions of a computer's components, such as light, sound, heat, radio frequencies, or ultrasonic waves, and even using the current fluctuations in the power lines. For instance, potential attackers could sabotage supply chains to infect an air-gapped computer, but they can't always count on an insider to unknowingly carry a USB with the data back out of a targeted facility. When it comes to high-value targets, these unusual techniques, which may sound theoretical and useless to many, could play an important role in exfiltrating sensitive data from an infected but air-gapped computer. How Does the Brightness Air-Gapped Attack Work? In his latest research with fellow academics, Mordechai Guri, the head of the cybersecurity research center at Israel's Ben Gurion University, devised a new covert optical channel using which attackers can steal data from air-gapped computers without requiring network connectivity or physically contacting the devices. "This covert channel is invisible, and it works even while the user is working on the computer. Malware on a compromised computer can obtain sensitive data (e.g., files, images, encryption keys, and passwords), and modulate it within the screen brightness, invisible to users," the researchers said. The fundamental idea behind encoding and decoding of data is similar to the previous cases, i.e., malware encodes the collected information as a stream of bytes and then modulate it as '1' and '0' signal. In this case, the attacker uses small changes in the LCD screen brightness, which remains invisible to the naked eye, to covertly modulate binary information in morse-code like patterns "In LCD screens each pixel presents a combination of RGB colors which produce the required compound color. In the proposed modulation, the RGB color component of each pixel is slightly changed." "These changes are invisible, since they are relatively small and occur fast, up to the screen refresh rate. Moreover, the overall color change of the image on the screen is invisible to the user." The attacker, on the other hand, can collect this data stream using video recording of the compromised computer's display, taken by a local surveillance camera, smartphone camera, or a webcam and can then reconstruct exfiltrated information using image processing techniques. As shown in the video demonstration shared with The Hacker News, researchers infected an air-gapped computer with specialized malware that intercepts the screen buffer to modulate the data in ASK by modifying the brightness of the bitmap according to the current bit ('1' or '0'). You can find detailed technical information on this research in the paper [PDF] titled, 'BRIGHTNESS: Leaking Sensitive Data from Air-Gapped Workstations via Screen Brightness,' published yesterday by Mordechai Guri, Dima Bykhovsky and Yuval Elovici. 
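To make the modulation scheme described above concrete, here is a small, self-contained C sketch of the idea: each payload bit is rendered as a tiny change to the red channel of a framebuffer, a '1' as the baseline plus a small delta and a '0' as the unmodified baseline. This is not the researchers' code; the frame size, baseline and delta are made-up placeholder values, and the framebuffer is a plain array standing in for the real screen buffer.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define WIDTH     64
#define HEIGHT    48
#define BASE_RED  0x80   /* baseline red value of the displayed image (illustrative) */
#define RED_DELTA 2      /* illustrative: +2/255 on the red channel encodes a '1' bit */

static uint8_t frame[HEIGHT][WIDTH][3];   /* stand-in RGB framebuffer */

/* Render one symbol: restore the baseline, then nudge the red channel if the
 * bit is a '1'. A real transmitter would hold each symbol for a fixed number
 * of screen refreshes so a camera can sample it. */
static void render_symbol(int bit)
{
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            frame[y][x][0] = (uint8_t)(BASE_RED + (bit ? RED_DELTA : 0));
}

/* Modulate a payload MSB-first, one bit per symbol. */
static void transmit(const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; i++)
        for (int bit = 7; bit >= 0; bit--)
            render_symbol((data[i] >> bit) & 1);
}

int main(void)
{
    memset(frame, BASE_RED, sizeof(frame));
    transmit((const uint8_t *)"SECRET", 6);
    printf("red value of pixel (0,0) after the last bit: %u\n", frame[0][0][0]);
    return 0;
}
```

On the receiving side, as the article notes, a camera records the display and image processing recovers the bit stream from the per-frame brightness.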
Air-Gapped Popular Data Exfiltration Techniques It's not the first time Ben-Gurion researchers came up with a covert technique to target air-gapped computers. Their previous research of hacking air-gap machines include: PowerHammer attack to exfiltrate data from air-gapped computers through power lines. MOSQUITO technique using which two (or more) air-gapped PCs placed in the same room can covertly exchange data via ultrasonic waves. BeatCoin technique that could let attackers steal private encryption keys from air-gapped cryptocurrency wallets. aIR-Jumper attack that takes sensitive information from air-gapped computers with the help of infrared-equipped CCTV cameras that are used for night vision. MAGNETO and ODINI techniques use CPU-generated magnetic fields as a covert channel between air-gapped systems and nearby smartphones. USBee attack that can be used to steal data from air-gapped computers using radio frequency transmissions from USB connectors. DiskFiltration attack that can steal data using sound signals emitted from the hard disk drive (HDD) of the targeted air-gapped computer; BitWhisper that relies on heat exchange between two computer systems to stealthily siphon passwords or security keys; AirHopper that turns a computer's video card into an FM transmitter to capture keystrokes; Fansmitter technique that uses noise emitted by a computer fan to transmit data; and GSMem attack that relies on cellular frequencies. Have something to say about this article? Comment below or share it with us on Facebook, Twitter or our LinkedIn Group. Sursa: https://thehackernews.com/2020/02/hacking-air-gapped-computers.html?m=1
-
A WordPress plugin can allow attackers to take control of websites. Users should update it immediately
Cătălin Niţu - 4 Feb 2020
If you have a site built on the WordPress platform, you should probably update one of the very popular plugins you may be using as soon as possible. It is Code Snippets, a very useful plugin that allows running PHP code without having to edit the WordPress functions file. The problem was identified by security researchers, who discovered that through this plugin you can embed unsigned code that allows attackers to take control of the site. Fortunately, the Code Snippets developers have already fixed the problem and no longer allow running code that requires administrator rights. So it is enough to go into the WordPress dashboard and look for the updates section, where the Code Snippets update should appear. For those who prefer the manual method, it is enough to download Code Snippets as a .zip from the official site and install it manually, also from the dashboard. According to the information available at the moment, there are more than 200,000 sites using this plugin that may be vulnerable to such an attack. However, the malicious code has to be introduced manually by the administrator, so the danger is not that imminent for all users. If you cannot update soon, try instead not to paste PHP code from untrusted sources, or code whose behavior you do not fully understand, into this plugin. That said, vulnerabilities in the WordPress platform and in various popular plugins are not exactly unusual. In the past, attacks have been carried out through a contact form plugin and through vulnerabilities that were patched over time. It is advisable to always keep your WordPress version and plugins up to date. Sometimes, however, compatibility between the platform and plugins breaks after an update. The best approach is to use as few complex plugins as possible, to ensure faster loading times.
Source: https://www.go4it.ro/internet/un-plugin-de-wordpress-poate-permite-atacatorilor-sa-preia-controlul-site-urilor.-utilizatorii-ar-trebui-sa-il-actualizeze-imediat-18787594/?
-
The vulnerability also affects Macs: https://www.techradar.com/news/linux-and-macos-pcs-hit-by-serious-sudo-vulnerability
-
I have not invested in cryptocurrencies and probably never will, but out of curiosity, has anyone made anything substantial from this kind of trading? I mean amounts in the thousands of euros or more, not 50 EUR.
-
For those interested in this "utility", a Mac version has also been released.
-
Desktop Goose
Check me out on twitter at @samnchiet
HONK HONK, HEAR YE. I have created a goose for your desktop. He'll nab your mouse, track mud on your screen... leave you a message, deliver you memes? Play video games with a desktop buddy who will attack you if you poke him. Fill out spreadsheets while your screen fills up with instances of Goose Notepad.
STREAMERS/YOUTUBERS - DM me on twitter for a custom build, with AI written to be more antagonistic towards gameplay.
This is not a final itch page - just trying to get something up so I can upload the project tonight!
Sursa: https://samperson.itch.io/desktop-goose
-
A message from Avast CEO Ondrej Vlcek Avast, 29 January 2020 To all our valued stakeholders – customers, partners, employees and investors, I’d like to take this opportunity and address the situation regarding Avast’s sale of user data through its subsidiary Jumpshot. Avast’s core mission is to keep people around the world safe and secure, and I realize the recent news about Jumpshot has hurt the feelings of many of you, and rightfully raised a number of questions – including the fundamental question of trust. As CEO of Avast, I feel personally responsible and I would like to apologize to all concerned. Protecting people is Avast’s top priority and must be embedded in everything we do in our business and in our products. Anything to the contrary is unacceptable. For these reasons, I – together with our board of directors – have decided to terminate the Jumpshot data collection and wind down Jumpshot’s operations, with immediate effect. To understand why we have come to this decision, let me give you some context. We started Jumpshot in 2015 with the idea of extending our data analytics capabilities beyond core security. This was during a period where it was becoming increasingly apparent that cybersecurity was going to be a big data game. We thought we could leverage our tools and resources to do this more securely than the countless other companies that were collecting data. Jumpshot has operated as an independent company from the very beginning, with its own management and board of directors, building their products and services via the data feed coming from the Avast antivirus products. During all those years, both Avast and Jumpshot acted fully within legal bounds – and we very much welcomed the introduction of GDPR in the European Union in May 2018, as it was a rigorous legal framework addressing how companies should treat customer data. Both Avast and Jumpshot committed themselves to 100% GDPR compliance. When I took on the role as CEO of Avast seven months ago, I spent a lot of time re-evaluating every portion of our business. During this process, I came to the conclusion that the data collection business is not in line with our privacy priorities as a company in 2020 and beyond. It is key to me that Avast’s sole purpose is to make the world a safer place, and I knew that ultimately, everything in the company would have to become aligned with that North Star of ours. While the decision we have made will regrettably impact hundreds of loyal Jumpshot employees and dozens of its customers, it is absolutely the right thing to do. I firmly believe it will help Avast focus on and unlock its full potential to deliver on its promise of security and privacy. And I especially thank our users, whose recent feedback accelerated our decision to take quick action. This change represents a new chapter in Avast’s history of keeping people around the world safe and secure. We’re excited to demonstrate our commitment to innovation and security priorities – with a singular focus in 2020 and beyond. Thank you for your continued support and the trust you are putting into us. We will not disappoint. Respectfully yours, Ondrej Vlcek, CEO Avast Sursa: https://blog.avast.com/a-message-from-ceo-ondrej-vlcek?
-
Leaked Documents Expose the Secretive Market for Your Web Browsing Data An Avast antivirus subsidiary sells 'Every search. Every click. Every buy. On every site.' Its clients have included Home Depot, Google, Microsoft, Pepsi, and McKinsey. by Joseph Cox Jan 27 2020, 4:00pm ShareTweet Snap Image: Hunter French An antivirus program used by hundreds of millions of people around the world is selling highly sensitive web browsing data to many of the world's biggest companies, a joint investigation by Motherboard and PCMag has found. Our report relies on leaked user data, contracts, and other company documents that show the sale of this data is both highly sensitive and is in many cases supposed to remain confidential between the company selling the data and the clients purchasing it. The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples' internet browsing histories. They show that the Avast antivirus program installed on a person's computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called "All Clicks Feed," which can track user behavior, clicks, and movement across websites in highly precise detail. Avast claims to have more than 435 million active users per month, and Jumpshot says it has data from 100 million devices. Avast collects data from users that opt-in and then provides that to Jumpshot, but multiple Avast users told Motherboard they were not aware Avast sold browsing data, raising questions about how informed that consent is. The data obtained by Motherboard and PCMag includes Google searches, lookups of locations and GPS coordinates on Google Maps, people visiting companies' LinkedIn pages, particular YouTube videos, and people visiting porn websites. It is possible to determine from the collected data what date and time the anonymized user visited YouPorn and PornHub, and in some cases what search term they entered into the porn site and which specific video they watched. Do you know about any other companies selling data? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com. Although the data does not include personal information such as users' names, it still contains a wealth of specific browsing data, and experts say it could be possible to deanonymize certain users. In a press release from July, Jumpshot claims to be "the only company that unlocks walled garden data" and seeks to "provide marketers with deeper visibility into the entire online customer journey." Jumpshot has previously discussed some of its clients publicly. But other companies mentioned in Jumpshot documents include Expedia, IBM, Intuit, which makes TurboTax, Loreal, and Home Depot. Employees are instructed not to talk publicly about Jumpshot's relationships with these companies. "It's very granular, and it's great data for these companies, because it's down to the device level with a timestamp," the source said, referring to the specificity and sensitivity of the data being sold. 
Motherboard granted the source anonymity to speak more candidly about Jumpshot's processes. Until recently, Avast was collecting the browsing data of its customers who had installed the company's browser plugin, which is designed to warn users of suspicious websites. Security researcher and AdBlock Plus creator Wladimir Palant published a blog post in October showing that Avast harvest user data with that plugin. Shortly after, browser makers Mozilla, Opera, and Google removed Avast's and subsidiary AVG's extensions from their respective browser extension stores. Avast had previously explained this data collection and sharing in a blog and forum post in 2015. Avast has since stopped sending browsing data collected by these extensions to Jumpshot, Avast said in a statement to Motherboard and PCMag. An infographic showing the supply chain of browsing data from Avast through to Jumpshot's clients. Image: Motherboard However, the data collection is ongoing, the source and documents indicate. Instead of harvesting information through software attached to the browser, Avast is doing it through the anti-virus software itself. Last week, months after it was spotted using its browser extensions to send data to Jumpshot, Avast began asking its existing free antivirus consumers to opt-in to data collection, according to an internal document. "If they opt-in, that device becomes part of the Jumpshot Panel and all browser-based internet activity will be reported to Jumpshot," an internal product handbook reads. "What URLs did these devices visit, in what order and when?" it adds, summarising what questions the product may be able to answer. Senator Ron Wyden, who in December asked Avast why it was selling users' browsing data, said in a statement, "It is encouraging that Avast has ended some of its most troubling practices after engaging constructively with my office. However I’m concerned that Avast has not yet committed to deleting user data that was collected and shared without the opt-in consent of its users, or to end the sale of sensitive internet browsing data. The only responsible course of action is to be fully transparent with customers going forward, and to purge data that was collected under suspect conditions in the past." Despite Avast currently asking users to opt back into the data collection via a pop-up in the antivirus software, multiple Avast users said they did not know that Avast was selling browsing data. "I was not aware of this," Keith, a user of the free Avast antivirus product who only provided their first name, told Motherboard. "That sounds scary. I usually say no to data tracking," they said, adding that they haven't yet seen the new opt-in pop-up from Avast. "Did not know that they did that :(," another free Avast antivirus user said in a Twitter direct message. Motherboard and PCMag contacted over two dozen companies mentioned in internal documents. Only a handful responded to questions asking what they do with data based on the browsing history of Avast users. "We sometimes use information from third-party providers to help improve our business, products and services. We require these providers to have the appropriate rights to share this information with us. In this case, we receive anonymized audience data, which cannot be used to identify individual customers," a Home Depot spokesperson wrote in an emailed statement. 
Microsoft declined to comment on the specifics of why it purchased products from Jumpshot, but said that it doesn't have a current relationship with the company. A Yelp spokesperson wrote in an email, "In 2018, as part of a request for information by antitrust authorities, Yelp's policy team was asked to estimate the impact of Google’s anticompetitive behavior on the local search marketplace. Jumpshot was engaged on a one-time basis to generate a report of anonymized, high-level trend data which validated other estimates of Google’s siphoning of traffic from the web. No PII was requested or accessed." "Every search. Every click. Every buy. On every site." Southwest Airlines said it had discussions with Jumpshot but didn't reach an agreement with the company. IBM said it did not have a record of being a client, and Altria said it is not working with Jumpshot, although didn't specify if it did so previously. Google did not respond to a request for comment. On its website and in press releases, Jumpshot names Pepsi, and consulting giants Bain & Company and McKinsey as clients. As well as Expedia, Intuit, and Loreal, other companies which are not already mentioned in public Jumpshot announcements include coffee company Keurig, YouTube promotion service vidIQ, and consumer insights firm Hitwise. None of those companies responded to a request for comment. On its website, Jumpshot lists some previous case studies for using its browsing data. Magazine and digital media giant Condé Nast, for example, used Jumpshot's products to see whether the media company's advertisements resulted in more purchases on Amazon and elsewhere. Condé Nast did not respond to a request for comment. ALL THE CLICKS Jumpshot sells a variety of different products based on data collected by Avast's antivirus software installed on users' computers. Clients in the institutional finance sector often buy a feed of the top 10,000 domains that Avast users are visiting to try and spot trends, the product handbook reads. Another Jumpshot product is the company's so-called "All Click Feed." It allows a client to buy information on all of the clicks Jumpshot has seen on a particular domain, like Amazon.com, Walmart.com, Target.com, BestBuy.com, or Ebay.com. In a tweet sent last month intended to entice new clients, Jumpshot noted that it collects "Every search. Every click. Every buy. On every site" [emphasis Jumpshot's.] Jumpshot's data could show how someone with Avast antivirus installed on their computer searched for a product on Google, clicked on a link that went to Amazon, and then maybe added an item to their cart on a different website, before finally buying a product, the source who provided the documents explained. One company that purchased the All Clicks Feed is New York-based marketing firm Omnicom Media Group, according to a copy of its contract with Jumpshot. Omnicom paid Jumpshot $2,075,000 for access to data in 2019, the contract shows. It also included another product called "Insight Feed" for 20 different domains. The fee for data in 2020 and then 2021 is listed as $2,225,000 and $2,275,000 respectively, the document adds. A section of an internal Jumpshot document obtained by Motherboard and PCMag. Motherboard has reconstructed the document rather than provide a direct screenshot. Jumpshot gave Omnicom access to all click feeds from 14 different countries around the world, including the U.S., England, Canada, Australia, and New Zealand. 
The product also includes the inferred gender of users "based on browsing behavior," their inferred age, and "the entire URL string" but with personally identifiable information (PII) removed, the contract adds. Omnicom did not respond to multiple requests for comment. According to the Omnicom contract, the "device ID" of each user is hashed, meaning the company buying the data should not be able to identify who exactly is behind each piece of browsing activity. Instead, Jumpshot's products are supposed to give insights to companies who may want to see what products are particularly popular, or how effective an ad campaign is working. "What we don't do is report on the Jumpshot Device ID that executed the clicks to protect against the triangulation of PII," one internal Jumpshot document reads. But Jumpshot's data may not be totally anonymous. The internal product handbook says that device IDs do not change for each user, "unless a user completely uninstalls and reinstalls the security software." Numerous articles and academic studies have shown how it is possible to unmask people using so-called anonymized data. In 2006, New York Times reporters were able to identify a specific person from a cache of supposedly anonymous search data that AOL publicly released. Although the tested data was more focused on social media links, which Jumpshot redacts somewhat, a 2017 study from Stanford University found it was possible to identify people from anonymous web browsing data. "De-identification has shown to be a very failure-prone process. There are so many ways it can go wrong," Günes Acar, who studies large-scale internet tracking at the Computer Security and Industrial Cryptography research group at the Department of Electrical Engineering of the Katholieke Universiteit Leuven, said. A section of an internal Jumpshot document obtained by Motherboard and PCMag. Motherboard has reconstructed the document rather than provide a direct screenshot. De-anonymization becomes a greater concern when considering how the eventual end-users of Jumpshot's data could combine it with their own data. "Most of the threats posed by de-anonymization—where you are identifying people—comes from the ability to merge the information with other data," Acar said. A set of Jumpshot data obtained by Motherboard and PCMag shows how each visited URL comes with a precise timestamp down to the millisecond, which could allow a company with its own bank of customer data to see one user visiting their own site, and then follow them across other sites in the Jumpshot data. "It's almost impossible to de-identify data," Eric Goldman, a professor at the Santa Clara University School of Law, said. "When they promise to de-identify the data, I don't believe it." Motherboard and PCMag asked Avast a series of detailed questions about how it protects user anonymity as well as details on some of the company's contracts. Avast did not answer most of the questions but wrote in a statement, "Because of our approach, we ensure that Jumpshot does not acquire personal identification information, including name, email address or contact details, from people using our popular free antivirus software." "Users have always had the ability to opt out of sharing data with Jumpshot. 
As of July 2019, we had already begun implementing an explicit opt-in choice for all new downloads of our AV, and we are now also prompting our existing free users to make an explicit choice, a process which will be completed in February 2020," it said, adding that the company complies with the California Consumer Privacy Act (CCPA) and Europe's General Data Protection Regulation (GDPR) across its entire global user base. "We have a long track record of protecting users’ devices and data against malware, and we understand and take seriously the responsibility to balance user privacy with the necessary use of data," the statement added. "It's almost impossible to de-identify data." When PCMag installed Avast's antivirus product for the first time this month, the software did ask if they wanted to opt-in to data collection. "If you allow it, we'll provide our subsidiary Jumpshot Inc. with a stripped and de-identified data set derived from your browsing history for the purpose of enabling Jumpshot to analyze markets and business trends and gather other valuable insights," the opt-in message read. The pop-up did not go into detail on how Jumpshot then uses this browsing data, however. "The data is fully de-identified and aggregated and cannot be used to personally identify or target you. Jumpshot may share aggregated insights with its customers," the pop-up added. Just a few days ago, the Twitter account for Avast subsidiary AVG tweeted, "Do you remember the last time you cleaned your #browser history? Storing your browsing history for a long time can take up memory on your device and can put your private info at risk." Sursa: https://www.vice.com/en_us/article/qjdkq7/avast-antivirus-sells-user-browsing-data-investigation
-
Oh, I always thought that LinkedIn Learning thing was nonsense, but if it's actually Lynda (I didn't know that), it's not bad. A while ago I bought a Pluralsight subscription, but I didn't have time for it and closed it. I also had access to Safari Online (the O'Reilly one) for a while, but I didn't do anything concrete, just browsed through many books and read only the interesting parts. Ah yes, I'm thinking of getting Pentester-Academy, the quality is very good and especially the price.
-
I haven't done anything useful in the last few days (from a "security" point of view), but I think I'll make some time to update my projects on GitHub. Then I want to read either Windows Internals 7 Part I or Gray Hat Hacking 5th Edition.
-
On Tuesday, a critical vulnerability in Microsoft's CryptoAPI was patched - it can allow an attacker to generate a CA certificate that is considered trusted by the system, allowing attacks on TLS, code signing and co. In this video, we look at how exactly that vulnerability works, and how we can attack it using Oliver Lyak's proof-of-concept! If you don't know public key cryptography or want to learn more about EC, check the ArsTechnica EC primer: https://arstechnica.com/information-t...
The awesome PoC: https://github.com/ollypwn/CVE-2020-0601
Thomas Ptacek's explanation: https://news.ycombinator.com/item?id=...
The NSA advisory: https://media.defense.gov/2020/Jan/14...
Kudelski Blogpost: https://research.kudelskisecurity.com...
ArsTechnica Article: https://arstechnica.com/information-t...
-
Mitigations
ASLR
Arc4random
Atexit hardening
Development practises
Disk encryption
Embargoes handling
Explicit_bzero and bzero
Fork and exec
Fuzzing
KARL (Kernel Address Randomized Link)
L1 Terminal Fault (L1TF), aka Foreshadow
Lazy bindings
Libc symbols randomization
Library order randomization
MAP_CONCEAL
MAP_STACK
Mandatory W^X in userland
Microarchitectural Data Sampling, aka Fallout, RIDL and Zombieload
Missing mitigations
NULL-deref in kernel-land to code execution
PID randomization
Packages updates
Papers, academic research and threat model
Passwords hashing
Pledge
Position independent code
Privsep and privdrop
RELRO
RETGUARD and stack canaries
ROP gadgets removal
Rootless Xorg
SMAP, SMEP and their friends
SROP mitigation
SWAPGS — CVE-2019-1125
Secure boot and trusted boot
Secure levels
Setjmp and longjmp
Signify
Spectre v1 — CVE-2017-5753
Spectre v2 — CVE-2017-5715
Spectre v3, aka Meltdown — CVE-2017-5754
Stack clash
Stance on memory-safe languages
Support of %n in printf
TCP SYN cookies
TIOCSTI hardening
TRAPSLED
Tarpit
Unveil
Userland heap management
W^X
W^X "refinement"
2019 — stein — CC-BY-SA
Sursa: https://isopenbsdsecu.re/mitigations/
-
RDP to RCE: When Fragmentation Goes Wrong
Saturday, January 18, 2020
Tags: exploit CVE-2020-0609 CVE-2020-0610

Remote Desktop Gateway (RDG), previously known as Terminal Services Gateway, is a Windows Server component that provides routing for Remote Desktop (RDP). Rather than users connecting directly to an RDP Server, users instead connect and authenticate to the gateway. Upon successful authentication, the gateway will forward RDP traffic to an address specified by the user, essentially acting as a proxy. The idea is that only the gateway needs to be exposed to the Internet, leaving all RDP Servers safely behind the firewall. Because RDP is a much larger attack surface, a setup properly using RDG can significantly reduce an organization's attack surface.

In the January 2020 security update, Microsoft addressed two vulnerabilities in RDG. The bugs, CVE-2020-0609 and CVE-2020-0610, both allow for pre-authentication remote code execution.

Looking at the diff
The first step to analyze these bugs is to look at the difference between the original and patched versions of the affected DLL.
A BinDiff of the RDG executable before and after installing the patch.
It is clear only one function has been modified. RDG supports three different protocols: HTTP, HTTPS, and UDP. The updated function is responsible for handling the latter. Normally, one would show a side-by-side comparison of the function before and after the patch. Unfortunately, the code is extremely large and there are many changes. Instead, we have opted to present a pseudo-code representation of the function, in which irrelevant code has been stripped.
Pseudo-code for the UDP handler function

The RDG UDP protocol allows for large messages to be split across multiple separate UDP packets. Because UDP is connectionless, packets can arrive out of order. The job of this function is to re-assemble messages, ensuring each part is in the correct place. Every packet contains a header with the following fields:
fragment_id: the packet's position in the sequence
num_fragments: the total number of packets in the sequence
fragment_length: the length of the packet's data
The message handler uses the packet headers to ensure the message is re-assembled in the correct order, and no parts are missing. However, the implementation of this function introduces some exploitable bugs.

CVE-2020-0609
The packet handler's bounds checking.
memcpy_s copies each fragment to an offset within the reassembly buffer, which is allocated on the heap. The offset for each fragment is calculated by multiplying the fragment id by 1000. However, the bounds checking does not take the offset into account. Let's assume buffer_size is 1000, and we send a message with 2 fragments.
The 1st fragment (fragment_id=0) has a length of 1. this->bytes_written is 0, so the bounds check passes. 1 byte is written to the buffer at offset 0, and bytes_written is incremented by 1.
The 2nd fragment (fragment_id=1) has a length of 998. this->bytes_written is 1, and 1 + 998 is still smaller than 1000, so the bounds check passes. 998 bytes are written to the buffer at offset 1000 (fragment_id*1000), which results in writing 998 bytes past the end of the buffer.
Something to note is that packets don't have to be sent in order (remember, it's UDP). So if the first packet we send has fragment_id=65535 (the maximum), it will be written to offset 65535*1000, a full 65534000 bytes past the end of the buffer.
By manipulating the fragment_id, it’s possible to write up to 999 bytes anywhere between 1 and 65534000 after the end of the buffer. This vulnerability is much more flexible than a typical linear heap overflow. It allows us to not only control the size of the data written, but the offset to where it’s written. With the extra control, it’s easier to do more precise writes, avoiding unnecessarily data corruption. CVE-2020-0610 The packet handler's tracking of which fragments have been received. The class object maintains an array of 32-bit unsigned integers (one for each fragment). Once a fragment has been received, the corresponding array entry is set from 0 to 1. Once every element is set to 1, the message re-assembly is complete and the message can be processed. The array only has space for up to 64 entries, but the fragment ID can be between 0 and 65535. The only verification is that fragment_id is less than num_fragments (which can also be set to 65535). Therefore, setting the fragment_id to any value between 65 and 65535 will allow us to write a 1 (TRUE) outside the bounds of the array. Whilst being able to set a single value to 1 may seem implausible to turn into an RCE, even the tiniest modifications can have a huge impact on program behavior. Mitigations If for whatever reason you are unable to install the patch, it is still possible to prevent exploitation of these vulnerabilities. RDG supports the HTTP, HTTPS, and UDP protocols, but the vulnerabilities only exist in the code responsible for handling UDP. Simply disabling UDP Transport, or firewalling the UDP port (usually port 3391) is sufficient to prevent exploitation. Remote Desktop Gateway Settings Future work and detection In our efforts to improve detection capabilities, some of our research includes passive and active data capabilities for scanning for vulnerabilities like CVE-2020-0609 and CVE-2020-0610. As part of our platformization of threat intelligence, we have begun adding vulnerability information to Telltale, allowing organizations to determine if they are at risk. Sursa: https://www.kryptoslogic.com/blog/2020/01/rdp-to-rce-when-fragmentation-goes-wrong/
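Reconstructed purely from the description above (not from Microsoft's actual source), the vulnerable reassembly path can be sketched in C roughly as follows; the field names (fragment_id, num_fragments, fragment_length, bytes_written) follow the article's pseudo-code, and the structure layout is an assumption.

```c
#include <stdint.h>
#include <string.h>

#define FRAGMENT_SLOT          1000   /* each fragment lands at fragment_id * 1000 */
#define MAX_TRACKED_FRAGMENTS  64     /* size of the "received" bookkeeping array  */

struct udp_handler {
    uint8_t  *reassembly_buf;                          /* heap buffer for the message */
    uint32_t  buffer_size;
    uint32_t  bytes_written;
    uint32_t  fragment_received[MAX_TRACKED_FRAGMENTS];
};

/* Simplified per-fragment handler illustrating both bugs. */
int handle_fragment(struct udp_handler *h,
                    uint16_t fragment_id, uint16_t num_fragments,
                    uint16_t fragment_length, const uint8_t *data)
{
    if (fragment_id >= num_fragments)
        return -1;

    /* CVE-2020-0609: the bounds check only looks at the running total, never
     * at the destination offset, so a large fragment_id writes far past the
     * end of reassembly_buf even though this check passes. */
    if (h->bytes_written + fragment_length > h->buffer_size)
        return -1;

    memcpy(h->reassembly_buf + (size_t)fragment_id * FRAGMENT_SLOT,
           data, fragment_length);
    h->bytes_written += fragment_length;

    /* CVE-2020-0610: fragment_id only has to be below num_fragments (itself
     * attacker-controlled up to 65535), but the bookkeeping array has just 64
     * entries, so any id of 64 or more writes a 1 out of bounds. */
    h->fragment_received[fragment_id] = 1;
    return 0;
}
```

The first check never considers the destination offset, and the second write is indexed by an attacker-controlled 16-bit value into a 64-entry array, which is exactly the pair of primitives described as CVE-2020-0609 and CVE-2020-0610.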
-
Mimidrv In Depth: Exploring Mimikatz’s Kernel Driver Matt Hand Follow Jan 13 · 29 min read Mimikatz provides the opportunity to leverage kernel mode functions through the included driver, Mimidrv. Mimidrv is a signed Windows Driver Model (WDM) kernel mode software driver meant to be used with the standard Mimikatz executable by prefixing relevant commands with an exclamation point (!). Mimidrv is undocumented and relatively underutilized, but provides a very interesting look into what we can do while operating at ring 0. The goals of this post is to familiarize operators with the capability that Mimidrv provides, put forth some documentation to be used as a reference, introduce those who haven’t had much time working with the kernel to some core concepts, and provide defensive recommendations for mitigating driver-based threats. Why use Mimidrv? Simply put, the kernel is king. There are some Windows functionalities available that can’t be called from user mode, such as modifying running processes’ attributes and interacting directly with other loaded drivers. As we will delve into a later in this post, the driver provides us with a method to call these functions via a user mode application. Loading Mimidrv The first step in using Mimikatz’s driver is to issue the command !+. This command implants and starts the driver from user mode and requires that your current token has SeLoadDriverPrivilege assigned. Mimikatz first checks if the driver exists in the current working directory, and if it finds the driver on disk, it begins creating the service. Service creation is done via the Service Control Manager (SCM) API functions. Specifically, advapi32!ServiceCreate is used to register the service with the following attributes: If the service is created successfully, the “Everyone” group is granted access to the service, allowing any user on the system to interact with the service. For example, a low-privilege user can stop the service. Note: This is one of the reasons that post-op clean up is so important. Don’t forget to remove the driver (!-) when you are done so that you don’t leave it implanted for someone else to use. If that completes successfully, the service is finally started with a call to StartService. Post-Load Actions Once the service starts, it is Mimidrv’s turn to complete its setup. The driver does not do anything atypical during its startup process, but it may seem complicated you haven’t developed WDM drivers before. Every driver must have a defined DriverEntry function that is called as soon as the driver is loaded and is used to set up the requirements for the driver to run. You can think of this similarly to a main() function in user mode code. In Mimidrv’s DriverEntry function, there are four main things that happen. 1. Create the Device Object Clients do not talk directly to drivers, but rather device objects. Kernel mode drivers must create at least 1 device object, however this device object still can’t be accessed directly by user mode code without a symbolic link. We’ll cover the symbolic link a little later, but the creation of the device object must occur first. To create the device object, a call to nt!IoCreateDevice is made with some important details. Most notable of this is the third parameter, DeviceName. This is set in globals.h as “mimidrv”. This newly created device object can be seen with WinObj. 2. 
Set the DispatchDeviceControl and Unload Functions If that device object creation succeeds, it defines its DispatchDeviceControl function, registered at the IRP_MJ_DEVICE_CONTROL index in its MajorFunction dispatch table, as the MimiDispatchDeviceControl function. What this means is that any time it receives a IRP_MJ_DEVICE_CONTROL request, such as from kernel32!DeviceIoControl, Mimidrv will call its internal MimiDispatchDeviceControl function which will process the request. We will cover how this works in the “User Mode Interaction via MimiDispatchDeviceControl” section. Just as every driver must specify a DriveryEntry function, it must define a corresponding Unload function that is executed when the driver is unloaded. Mimidrv’s DriverUnload function is about as simple as it gets and its only job is to delete the symbolic link and then device object. 3. Create the Symbolic Link As mentioned earlier, if a driver wants to allow user mode code to interact with it, it must create a symbolic link. This symbolic link will be used by user mode applications, such as through calls to nt!CreateFile and kernel32!DeviceIoControl, in place of a “normal” file to send data to and receive data from the driver. To create the symbolic link, Mimidrv makes a call to nt!IoCreateSymbolicLink with the name of the symbolic link and the device object as arguments. The newly created device object and associated symlink can be seen in WinObj: 4. Initialize Aux_klib Finally, it initializes the Aux_klib library using AuxKlibInitialize, which must be done before being able to call any function in that library (more on that in the “Modules” section). User Mode Interaction via MimiDispatchDeviceControl After initialization, a driver’s job is simply to handle requests to it. It does this through a partially opaque feature called I/O request packets (IRPs).These IRPs contain I/O Control Codes (IOCTLs) which are mapped to function codes. These typically start at 0x8000, but Mimikatz starts at 0x000, against Microsoft’s recommendation. Mimikatz currently defines 23 IOCTLs in ioctl.h. Each one of these IOCTLs is mapped to a function. When Mimidrv receives one of these 23 defined IOCTLs, it calls the mapped function. This is where the core functionality of Mimidrv lies. Sending IRPs In order to get the driver to execute one of the functions mapped to the IOCTLs, we have to send an IRP from user mode via the symbolic link created earlier. Mimikatz handles this in the kuhl_m_kernel_do function, which trickles down to a call to nt!CreateFile to get a handle on the device object and kernel32!DeviceIoControl to sent the IRP. This hits the IRP_MJ_DEVICE_CONTROL major function, which was defined as MimiDispatchDeviceControl, and walks down the list of internally defined functions by their IOCTL codes. When a command is entered with the prefix “!”, it checks the KUHL_K_C structure, kuhl_k_c_kernel, to get the IOCTL associated with the command. The structure is defined as: In the struct, 19 commands are defined as: Despite there being 23 IOCTLs, there are only 19 commands available via Mimikatz. This is because 4 of the functions related to interacting with virtual memory are not mapped to commands. 
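Before moving on to the individual IOCTLs, it may help to see how the four DriverEntry steps described above fit together in code. The skeleton below is an illustrative reconstruction, not Mimidrv's actual source; only the "mimidrv" device and symbolic link names are taken from the article, and error handling is reduced to the minimum.

```c
#include <ntddk.h>
#include <aux_klib.h>

DRIVER_DISPATCH MimiDispatchDeviceControl;
DRIVER_UNLOAD   DriverUnload;

UNICODE_STRING g_DeviceName  = RTL_CONSTANT_STRING(L"\\Device\\mimidrv");
UNICODE_STRING g_SymlinkName = RTL_CONSTANT_STRING(L"\\DosDevices\\mimidrv");

VOID DriverUnload(PDRIVER_OBJECT DriverObject)
{
    /* Undo setup in reverse order: symbolic link first, then the device object. */
    IoDeleteSymbolicLink(&g_SymlinkName);
    IoDeleteDevice(DriverObject->DeviceObject);
}

NTSTATUS MimiDispatchDeviceControl(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    UNREFERENCED_PARAMETER(DeviceObject);
    /* A real handler would switch on the IOCTL code carried by the IRP here. */
    Irp->IoStatus.Status = STATUS_SUCCESS;
    Irp->IoStatus.Information = 0;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_SUCCESS;
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    PDEVICE_OBJECT deviceObject = NULL;
    NTSTATUS status;

    UNREFERENCED_PARAMETER(RegistryPath);

    /* 1. Create the device object that clients will talk to. */
    status = IoCreateDevice(DriverObject, 0, &g_DeviceName,
                            FILE_DEVICE_UNKNOWN, 0, FALSE, &deviceObject);
    if (!NT_SUCCESS(status))
        return status;

    /* 2. Register the IRP_MJ_DEVICE_CONTROL handler and the unload routine. */
    DriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = MimiDispatchDeviceControl;
    DriverObject->DriverUnload = DriverUnload;

    /* 3. Create the symbolic link so user mode can open \\.\mimidrv. */
    status = IoCreateSymbolicLink(&g_SymlinkName, &g_DeviceName);
    if (!NT_SUCCESS(status)) {
        IoDeleteDevice(deviceObject);
        return status;
    }

    /* 4. Initialize Aux_klib before calling any of its functions. */
    status = AuxKlibInitialize();
    if (!NT_SUCCESS(status)) {
        IoDeleteSymbolicLink(&g_SymlinkName);
        IoDeleteDevice(deviceObject);
    }
    return status;
}
```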
The IOCTLs and associated functions are: IOCTL_MIMIDRV_VM_READ→kkll_m_memory_vm_read IOCTL_MIMIDRV_VM_WRITE→kkll_m_memory_vm_write IOCTL_MIMIDRV_VM_ALLOC→kkll_m_memory_vm_alloc IOCTL_MIMIDRV_VM_FREE→kkll_m_memory_vm_free Driver Function Internals The commands can be broken down into 7 groups— General, Process, Notify, Modules, Filters, Memory, and SSDT. These are, for the most part (minus the General functions), logically organized in the Mimidrv source code with file name format kkll_m_<group>.c. General !ping The ping command can be used to test the ability to write data to and receive data from Mimidrv. This is done through Benjamin’s kprintf function, which is really just a simplified call to nt!RtlStringCbPrintfExW which allows the use of the KIWI_BUFFER structure to keep the code tidy. !bsod As alluded to by the name, this functionality bluescreens the box. This is done via a call to KeBugCheck with a bugcheck code of MANUALLY_INITIATED_CRASH, which will be shown on the bluescreen under the “stop code”. !sysenvset & !sysenvdel The !sysenvset command sets a system environment variable, but not in the traditional sense (e.g. modifying %PATH%). Instead, on systems configured with Secure Boot, it modifies a variable in the UEFI firmware store, specifically Kernel_Lsa_Ppl_Config, which is associated with the RunAsPPL value in the registry. The GUID that it writes this value to, 77fa9abd-0359–4d32-bd60–28f4e78f784b, is the Protected Store which Windows can use to store values that it wants to protect from user and admin modification. This effectively overrides the registry, so even if you were to modify the RunAsPPL key and reboot, LSASS would still be protected. The !sysenvdel does the opposite and removes this environment variable. The RunAsPPL registry key could then be deleted, the system rebooted, and then we could get a handle on LSASS. Process The first group of modules we’ll really dig into is the Process group, which allows for interaction and modification of user mode processes. Because we will be working with processes in this section, it is important to understand what they look like from the kernel’s perspective. Processes in the kernel center around the EPROCESS structure, an opaque structure that serves as the object for a process. Inside of the structure are all of the attributes of a process that we are familiar with, such as the process ID, token information, and process environment block (PEB). EPROCESS structures in the kernel are connected through a circular doubly-linked list. The list head is stored in the kernel variable PsActiveProcessHead and is used as the “beginning” of the list. Each EPROCESS structure contains a member, ActiveProcessLinks, of the type LIST_ENTRY. The LIST_ENTRY structure has 2 components — a forward link (Flink) and a backward link (Blink). The Flink points to the Flink of the next EPROCESS structure in the list. The Blink points to the Flink of the previous EPROCESS structure in the list. The Flink of the last structure in the list points to the Flink of PsActiveProcessHead. This creates a loop of EPROCESS structures and is represented in this simplified graphic. !process The first module gives us a list of processes running on the system, along with some additional information about them. This works by walking the linked list described earlier using 2 Windows version-specific offsets — EprocessNext and EprocessFlags2. 
EprocessNext is the offset in the current EPROCESS structure containing the address of the ActiveProcessLinks member, where the Flink to the next process can be read (e.g. 0x02f0 in Windows 10 1903). EProcessFlags2 is a second set of ULONG bitfields introduced in Windows Vista, hence why this is only shown when running on systems Vista and above, used to give use some more detail. Specifically: PrimaryTokenFrozen — Uses a ternary to return “F-Tok” if the primary token is frozen and nothing if it isn’t. If PrimaryTokenFrozen is not set, we can swap in our token such as in the case of suspended processes. In a vast majority of cases, you will find that the primary token is frozen. SignatureProtect — This is actually 2 values - SignatureLevel and SectionSignatureLevel. SignatureLevel defines the signature requirements of the primary module. SectionSignatureLevel defines the minimum signature level requirements of a DLL to be loaded into the process. Protection — These 3 values, Type, Audit, and Signer, are members of the PS_PROTECTION structure which represent the process’ protection status. Most important of these is Type, which maps to the following statuses, which you may recognize as PP/PPL: !processProtect The !processProtect function is one of, if not the most, used functionalities supplied by Mimidrv. Its objective is to add or remove process protection from a process, most commonly LSASS. The way it goes about modifying the protection status is relatively simple: Use nt!PsLookupProcessByProcessId to get a handle on a process’ EPROCESS structure by its PID. Go to the version-specific offset of SignatureProtect in the EPROCESS structure. Patches 5 values — SignatureLevel, SectionSignatureLevel, Type, Audit, and Signer (the last 3 being members of the PS_PROTECTION struct) — depending on whether or not it is protecting or unprotecting the process. If protecting, the values will be 0x3f, 0x3f, 2, 0, 6, representing a protected signer of WinTcb and protection level of Max. If unprotecting, the values will be 0, 0, 0, 0, 0, representing an unprotected process. Finally, dereference the EPROCESS object. This module is particularly relevant for us as attackers because most obviously we can remove protection from LSASS in order to extract credentials, but more interestingly we can protect an arbitrary process and use that to get a handle on another protected process. For example, we use !processProtect to protect our running mimikatz.exe and then run some command to extract credentials from LSASS and it should work despite LSASS being protected. An example of this use case is shown below. !processToken Continuing with another operationally-relevant function is !processToken which can be used to duplicate a process token and pass it to an attacker-specified process. This is most commonly used during DCShadow attacks and is similar to token::elevate, but modifies the process token instead of the thread token. With no arguments passed, this function will grant all cmd.exe, powershell.exe, and mimikatz.exe processes a NT AUTHORITY\SYSTEM token. Alternatively, it takes “to” and “from” parameters which can be used to define the process you wish to copy the token from and process you want to copy it to. To duplicate the token, Mimikatz first sets the “to” and “from” PIDs to the user-supplied values, or “0” if not set, and then places them in a MIMIDRV_PROCESS_TOKEN_FROM_TO struct, which sent to Mimidrv via IOCTL_MIMIDRV_PROCESS_TOKEN. 
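The user-mode half of that exchange boils down to opening the device through its symbolic link and issuing DeviceIoControl, as described in the "Sending IRPs" section above. In the sketch below, the control code value and the from/to structure layout are illustrative placeholders, not Mimikatz's real definitions from ioctl.h.

```c
#include <windows.h>
#include <stdio.h>

/* Placeholder control code built the usual way with CTL_CODE; the real value
 * is defined in Mimikatz's ioctl.h. */
#define IOCTL_MIMIDRV_PROCESS_TOKEN \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x012, METHOD_BUFFERED, FILE_ANY_ACCESS)

/* Hypothetical stand-in for the MIMIDRV_PROCESS_TOKEN_FROM_TO structure. */
typedef struct _PROCESS_TOKEN_FROM_TO {
    DWORD FromPid;   /* process whose token is duplicated */
    DWORD ToPid;     /* process that receives the duplicate (0 = driver defaults) */
} PROCESS_TOKEN_FROM_TO;

int main(void)
{
    PROCESS_TOKEN_FROM_TO req = { 4, 0 };   /* e.g. duplicate from PID 4 (System) */
    DWORD returned = 0;

    /* Open the device object through the \\.\mimidrv symbolic link. */
    HANDLE device = CreateFileW(L"\\\\.\\mimidrv", GENERIC_READ | GENERIC_WRITE,
                                0, NULL, OPEN_EXISTING, 0, NULL);
    if (device == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* This call becomes an IRP_MJ_DEVICE_CONTROL request that the driver
     * routes through its MimiDispatchDeviceControl handler. */
    if (!DeviceIoControl(device, IOCTL_MIMIDRV_PROCESS_TOKEN,
                         &req, sizeof(req), NULL, 0, &returned, NULL)) {
        printf("DeviceIoControl failed: %lu\n", GetLastError());
    }

    CloseHandle(device);
    return 0;
}
```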
Once Mimidrv receives the PIDs specified by the user, it gets handles on the "to" and "from" processes using nt!PsLookupProcessByProcessId. If it was able to get a handle on those processes, it uses nt!ObOpenObjectByPointer to get a kernel handle (OBJ_KERNEL_HANDLE) on the "from" process. This is required by the following call to nt!ZwOpenProcessTokenEx, which will return a handle on the "from" process' token. At this point, the logic forks somewhat. In the first case where the user has supplied their own "to" process, Mimidrv calls kkll_m_process_token_toProcess. This function first uses nt!ObOpenObjectByPointer to get a kernel handle on the "to" process. Then it calls ZwDuplicateToken to get the token from the "from" process and stash it in an undocumented PROCESS_ACCESS_TOKEN struct as the Token attribute. If the system is running Windows Vista or above, it sets PrimaryTokenFrozen (described in the !process section) and then calls the undocumented nt!ZwSetInformationProcess function to do the actual work of giving the duplicated token to the "to" process. Once that completes, it cleans up by closing the handles to the "to" process and PROCESS_ACCESS_TOKEN struct.

In the event that no "to" process was specified, Mimidrv leverages the kkll_m_process_enum function used in !process to walk the list of processes on the system. Instead of using the kkll_m_process_list_callback callback, it uses kkll_m_process_systoken_callback, which uses ntdll!RtlCompareMemory to check if the ImageFileName matches "mimikatz.exe", "cmd.exe", or "powershell.exe". If it does, it passes a handle to that process to kkll_m_process_token_toProcess and the functionality described in the paragraph before this is used to grant a duplicated token to that process, and then it continues walking the linked list looking for other matches.

!processPrivilege
This is a relatively simple function that grants all privileges (e.g. SeDebugPrivilege, SeLoadDriverPrivilege), but includes some interesting code that highlights the power of operating in ring 0. Before we jump into exactly how Mimidrv modifies the target process token, it is important to understand what a token looks like in the kernel. As discussed earlier, the EPROCESS structure contains attributes of a process, including the token (offset 0x360 in Windows 10 1903). You may notice that the token is of the type EX_FAST_REF rather than TOKEN. This is some internal Windows weirdness, but these pointers are built around the fact that kernel structures are aligned on a 16-byte boundary on x64 systems. Due to this alignment, the spare low 4 bits of the pointer are available to be used for reference counting. Where this becomes relevant for us is that once those low 4 bits are masked off, what remains is the reference to our object, in this case a pointer to the TOKEN structure.

To demonstrate this practically, let's hunt down the token of the System process in WinDbg. First, we get the address of the EPROCESS structure for the process. Because we know that the token EX_FAST_REF will be at offset 0x360, we can use WinDbg's calculator to do some quick math and give us the memory address at the result of the equation. Now that we have the address of the EX_FAST_REF, we can zero out the low 4 bits (the last hex digit) to get the address of our TOKEN structure, which we'll examine with the !token extension. So now that we can identify the TOKEN structure, we can examine some of its attributes. Most relevant to !processPrivileges is the Privileges attribute (offset 0x40 on Vista and above).
This attribute is of the type SEP_TOKEN_PRIVILEGES which contains 3 attributes — Present, Enabled, and EnabledByDefault. These are bitmasks representing the token permissions we are used to seeing (SeDebugPrivilege, SeLoadDriverPrivilege, etc.). If we examine the function called by Mimidrv when we issue the !processPrivileges command, we can see that these bitmasks are being overwritten to enable all privileges on the primary token of the target process. Here’s what the result looks like in the GUI. And here it is in the debugger while inspecting the memory at the Privileges offset. To sum this module up, !processPrivileges overwrites a specific bitmask in a target process’ TOKEN structure which grants all permissions to the target process. Notify The kernel provides ways for drivers to “subscribe” to specific events that happen on a system by registering callback functions to be executed when the specific event happens. Common examples of this are shutdown handlers, which allow the driver to perform some action when the system is shutting down (often for persistence), and process creation notifications, which let the driver know whenever a new process is started on the system (commonly used by EDRs). These modules allow us to find drivers that subscribe to specific event notifications and where their callback function is located. The code Mimidrv uses to do this is a bit hard to read, but the general flow is: Search for a string of bytes, specifically the opcodes directly after a LEA instruction containing the pointer to a structure in system memory. Work with the structure (or pointers to structures) at the address passed in the LEA instruction to find the address of the callback functions. Return some details about the function, such as the driver that it belongs to. !notifProcess A driver can opt to receive notifications when a process is created or destroyed by using nt!PsSetCreateProcessNotifyRoutine(Ex/Ex2) with a callback function specified in the first parameter. When a process is created, a process object for the newly created process is returned along with a PS_CREATE_NOTIFY_INFO structure, which contains a ton of relevant information about the newly created process, including its parent process ID and command line arguments. A simple implementation of process notifications can be found here. This type of notification has some advantages over Event Tracing for Windows (ETW), namely that there is no delay in receiving the creation/termination notifications and because the process object is passed to our driver, we have a way to prevent the process from starting during a pre-operation callback. Seems pretty useful for an EDR product, eh? We first begin by searching for the pattern of bytes (opcodes starting at LEA RCX,[RBX*8] in the screenshot below) between the addresses of nt!PsSetCreateProcessNotifyRoutine and nt!IoCreateDriver which marks the start of the undocumented nt!PspSetCreateProcessNotifyRoutine array. At the address of nt!PspSetCreateProcessNotifyRoute is an array of ≤64 pointers to EX_FAST_REF structures. When a process is created/terminated, nt!PspCallProcessNotifyRoutines walks through this array and calls all of the callbacks registered by drivers on the system. In this array, we will work with the 3rd item (0xffff9409c37c7e6f). The last 4 bits of these pointer addresses are insignificant, so they are removed which gives us the address of the EX_CALLBACK_ROUTINE_BLOCK. 
The EX_CALLBACK_ROUTINE_BLOCK structure is undocumented, but thanks to the folks over at ReactOS, we have a definition of it. The first 8 bytes of the structure represent an EX_RUNDOWN_REF structure, so we can jump past them to get the address of the callback function inside of a driver. We then take that address and see which module is loaded at that address. And there we can see that this is the address of the process notification callback for WdFilter.sys, Defender’s driver! Could we write a RET instruction at this address to neuter this functionality in the driver? 😈
!notifThread
The !notifThread command is nearly identical to the !notifProcess command, but it searches for the address of nt!PspCreateThreadNotifyRoutine to find the pointers to the thread notification callback functions instead of nt!PspCreateProcessNotifyRoutine.
!notifImage
These notifications allow a driver to receive an event whenever an image (e.g. driver, DLL, EXE) is mapped into memory. Just as in the function above, !notifImage simply changes the array it is searching for to nt!PspLoadImageNotifyRoutine in order to locate the pointers to image load notification callback routines. From there it follows the exact same process of bitshifting to get the address of the callback function.
!notifReg
A driver can register pre- and post-operation callbacks for registry events, such as when a key is read, created, or modified, using nt!CmRegisterCallback(Ex). While this functionality isn’t as common as the types we discussed previously, it gives developers a way to prevent the modification of protected registry keys. This module is simpler than the previous 3 in that it really centers around finding and working with a single undocumented structure. Mimidrv searches for the address of nt!CallbackListHead, which is a doubly-linked list whose entries each contain a pointer to a registry notification callback routine. At offset 0x28 in each entry is the address of the registered callback routine. Mimidrv simply iterates through the linked list getting the callback function addresses and passing them to kkll_m_modules_fromAddr to get the offset of the function in its driver.
!notifObject
Note: This command is not working in release 2.2.0 2019122 against Win10 1903 and returns 0x490 (ERROR_NOT_FOUND) when calling kernel32!DeviceIoControl, likely due to not being able to find the address of nt!ObpTypeDirectoryObject. I will update this section if it is modified and working again.
Finally, a driver can register a callback to receive notifications when there are attempts to open or duplicate handles to processes, threads, or desktops, such as in the event of token stealing. This is useful for many different types of software, and is used by AVG’s driver to protect its user mode processes from being debugged. These callbacks can be either pre-operation or post-operation. Pre-operation callbacks allow the driver to modify the requested handle, such as the requested access, before the operation which returns a handle is complete. A post-operation callback allows the driver to perform some action after the operation has completed. Mimidrv first searches for the address of nt!ObpTypeDirectoryObject, which holds a pointer to the OBJECT_DIRECTORY structure. The “HashBuckets” member of this structure is a linked list of OBJECT_DIRECTORY_ENTRY structures, each containing an object value at offset 0x8.
Each of these Objects is an OBJECT_TYPE structure containing details about the specific type of object (processes, tokens, etc.), which is more easily viewed with WinDbg’s !object extension. The Hash number is the index in the HashBucket above. Mimidrv then extracts the Name member from the OBJECT_TYPE structure. The other member of note is CallbackList, which defines a list of pre- and post-operation callbacks which have been registered by nt!ObRegisterCallbacks. It is a LIST_ENTRY structure that points to the undocumented CALLBACK_ENTRY_ITEM structure. Mimidrv iterates through the linked list of CALLBACK_ENTRY_ITEM structures, passing each one to kkll_m_notify_desc_object_callback where the pointer to the pre-/post-operation callback is extracted and passed to kkll_m_modules_fromAddr in order to find the offset in the driver that the callback belongs to. Finally, Mimidrv loops through an array of 8 object methods starting from OBJECT_TYPE + 0x70. If a pointer is set, Mimidrv passes it to kkll_m_modules_fromAddr to get the address of the object method and returns it to the user. This can be seen in the example below for the Process object type.
Object Method Pointers for the Process Object Type
While this function is not working on the latest release of Windows 10, the output would be similar to this:
Source: https://www.slideshare.net/ASF-WS/asfws-2014-rump-session
Modules
While this section only contains 1 command, it also contains another core kernel concept — memory pools. Memory pools are kernel objects that allow chunks of memory to be allocated from a designated memory region, either paged or nonpaged. Each of these types has a specific use case. The paged pool is virtual memory that can be paged in/out (i.e. read/written) to the page file on disk (C:\pagefile.sys). This is the recommended pool for drivers to use. The nonpaged pool can’t be paged out and will always live in RAM. This is required in specific situations where page faults can’t be tolerated, such as when processing Interrupt Service Routines (ISRs) and during Deferred Procedure Calls (DPCs). A standard allocation of paged pool memory is a single call to nt!ExAllocatePoolWithTag, which takes the pool type, the allocation size, and a pool tag. The last item to note is that third and final parameter, the pool tag. This is typically a unique 4-byte ASCII value and is used to help track down drivers with memory leaks. A tag written as 'TTAM' in the source, for example, would tag the memory as “MATT” (the tag is little endian). Mimidrv uses the pool tag “kiwi”, which would be shown as “iwik”, as seen in Pavel Yosifovich’s PoolMonX below.
!modules
The !modules command lists details about drivers loaded on the system. This command primarily centers around the aux_klib!AuxKlibQueryModuleInformation function. Mimidrv first uses aux_klib!AuxKlibQueryModuleInformation to get the total amount of memory it will need to allocate in order to hold the AUX_MODULE_EXTENDED_INFO structs containing the module information. Once it receives that, it will use nt!ExAllocatePoolWithTag to allocate the required amount of memory from the paged pool using its pool tag, “kiwi”. Some quick math happens to determine the number of images loaded by dividing the size returned by the first call to aux_klib!AuxKlibQueryModuleInformation by the size of the AUX_MODULE_EXTENDED_INFO struct. A subsequent call to aux_klib!AuxKlibQueryModuleInformation is made to get all of the module information and store it for processing.
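For reference, that size-query / allocate / query-again pattern looks roughly like the sketch below. This is a simplified illustration with most error handling trimmed, not Mimidrv's actual code; only the API calls and the 'kiwi' tag come from the description above, and the function name and DbgPrint output are hypothetical.

#include <ntddk.h>
#include <aux_klib.h>

// Simplified sketch of the pattern described above (not Mimidrv's actual code).
NTSTATUS ListLoadedModules(VOID)
{
    ULONG size = 0;
    NTSTATUS status;

    AuxKlibInitialize(); // required before any other aux_klib call

    // First call: ask only for the buffer size needed to hold every entry.
    AuxKlibQueryModuleInformation(&size, sizeof(AUX_MODULE_EXTENDED_INFO), NULL);

    PAUX_MODULE_EXTENDED_INFO modules =
        (PAUX_MODULE_EXTENDED_INFO)ExAllocatePoolWithTag(PagedPool, size, 'kiwi');
    if (modules == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;

    ULONG count = size / sizeof(AUX_MODULE_EXTENDED_INFO);

    // Second call: fill the buffer with one AUX_MODULE_EXTENDED_INFO per loaded image.
    status = AuxKlibQueryModuleInformation(&size, sizeof(AUX_MODULE_EXTENDED_INFO), modules);
    if (NT_SUCCESS(status)) {
        for (ULONG i = 0; i < count; i++)
            DbgPrint("%p %08X %s\n", modules[i].BasicInfo.ImageBase,
                     modules[i].ImageSize, modules[i].FullPathName);
    }

    ExFreePoolWithTag(modules, 'kiwi');
    return status;
}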
Mimidrv then iterates through this pool of memory using the callback function kkll_m_modules_list_callback to copy the base address, image size, and file name into the output buffer which will be sent back to the user.
Filters
While we have primarily been exploring software drivers, there are 2 other types, filters and minifilters, that Mimidrv allows us to interact with. Filter drivers are considered legacy but are still supported. There are many types of filter drivers, but they all serve to expand the functionality of devices by filtering IRPs. Different subclasses of filter drivers exist to serve specific jobs, such as file system filter drivers and network filter drivers. Examples of file system filter drivers would be an antivirus engine, a backup agent, or an encryption agent. The most common filter driver you will see is FltMgr.sys, which exposes functionality required by filesystem filters so that developers can more easily develop minifilter drivers. Minifilter drivers are Microsoft’s recommendation for filter driver development and include some distinct advantages, including being able to be unloaded without a reboot and reduced code complexity. These types of drivers are more common than legacy filter drivers and can be listed/managed with fltmc.exe. The biggest difference between these 2 types in the context of Mimidrv is that minifilter drivers are managed via the Filter Manager APIs.
!filters
The !filters command works almost exactly the same as the !modules command, but instead leverages nt!IoEnumerateRegisteredFiltersList to get a list of registered filesystem filter drivers on the system, stores them in a DRIVER_OBJECT struct, and prints out the index of the driver as well as the DriverName member.
!minifilters
The !minifilters command displays the minifilter drivers registered on the system. This function is a little tough to read, but that’s because the functions Mimidrv needs to call have memory requirements that aren’t known ahead of time, so it makes a request solely to get the amount of memory required, allocates that memory, and then makes the real request. To help understand what is going on, it is helpful to break down each step by primary function:
1. FltEnumerateFilters — The first call is to fltmgr!FltEnumerateFilters, which enumerates all registered minifilter drivers on the system and returns a list of pointers.
2. FltGetFilterInformation — Next, we iterate over this list of pointers, calling fltmgr!FltGetFilterInformation to get a FILTER_FULL_INFORMATION structure back, containing details about each of the minifilters.
3. FltEnumerateInstances — For each of the minifilters, fltmgr!FltEnumerateInstances is used to get a list of instance pointers.
4. FltGetVolumeFromInstance — Next, fltmgr!FltGetVolumeFromInstance is used to return the volume each minifilter is attached to (e.g. \Device\HarddiskVolume4). Note that minifilters can have multiple instances attached to different volumes.
5. Get details about pre- and post-operation callbacks — We’ll dig into this next.
6. FltObjectDereference — When all instances have been iterated through, fltmgr!FltObjectDereference is used to dereference each instance and the list of minifilters.
As you can see, Mimidrv makes use of some pretty standard Filter Manager API functions. However, step 5 is a bit odd in that it gets information about the minifilter using hardcoded offsets and makes calls to kkll_m_modules_fromAddr to get offsets without much indication of what we are looking at.
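For reference, the documented part of that flow (roughly steps 1 and 2) boils down to something like the sketch below. This is a rough illustration rather than Mimidrv's real code; the fixed-size arrays and the DbgPrint output are just there to keep the example short.

#include <fltKernel.h>

// Rough sketch of steps 1-2: enumerate registered minifilters and print their names.
// Mimidrv's real code goes on to walk instances, volumes, and the callback tables.
VOID ListMinifilters(VOID)
{
    PFLT_FILTER filters[64];
    ULONG returned = 0;

    if (!NT_SUCCESS(FltEnumerateFilters(filters, RTL_NUMBER_OF(filters), &returned)))
        return;

    for (ULONG i = 0; i < returned; i++) {
        union {
            FILTER_FULL_INFORMATION info;
            UCHAR raw[512];
        } buf;
        ULONG bytes = 0;

        if (NT_SUCCESS(FltGetFilterInformation(filters[i], FilterFullInformation,
                                               &buf, sizeof(buf), &bytes))) {
            UNICODE_STRING name;
            name.Length = name.MaximumLength = buf.info.FilterNameLength;
            name.Buffer = buf.info.FilterNameBuffer;
            DbgPrint("[%lu] %wZ\n", i, &name);
        }

        FltObjectDereference(filters[i]); // each returned filter holds a reference we must release
    }
}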
In the output of !minifilters, there are addresses of PreCallback and/or PostCallback, but what are these? Minifilter drivers may register up to 1 pre-operation callback and up to 1 post-operation callback for each operation that they need to filter. When the Filter Manager processes an I/O operation, it passes the request down the driver stack starting with the minifilter with the highest altitude that has registered a pre-operation callback. This is the minifilter’s opportunity to act on the I/O operation before it is passed to the file system for completion. After the I/O operation is complete, the Filter Manager again passes down the driver stack for drivers with registered post-operation callbacks. Within these callbacks, the drivers can interact with the data, such as examining it or modifying it.
In order to understand what Mimidrv is parsing out, let’s dig into an example from the output of !minifilters on my system, specifically for the Named Pipe Service Triggers driver, npsvctrig.sys. We’ll crack open WinDbg and first look for our registered filters. Here we can see an instance of npsvctrig at address 0xffffc18f97e34cb0. Inspecting the FLT_INSTANCE structure at this address shows the member CallbackNodes at offset 0x0a0. There are 3 CALLBACK_NODE structures (screenshot snipped for viewing). Inspecting the first CALLBACK_NODE structure at 0xffffc18f97e34d50, we can see the PostOperation attribute (offset 0x20) has an address of 0xfffff8047e5f6010, the same one that was shown in Mimikatz for “CLOSE”, which correlates to IRP_MJ_CLOSE. That means that this is a pointer to the post-operation callback’s address! But what about the offset inside the driver shown in the output? To get this for us, Mimidrv calls kkll_m_modules_fromAddr, which in turn calls kkll_m_modules_enum, which we walked through in the “Modules” section, but this time with a callback function of kkll_m_modules_fromAddr_callback. This callback returns the address of the callback, the filename of the driver excluding the path, and the offset of the address we provided from the image’s base address. If we take a quick look at the offset 0x6010 inside of npsvctrig.sys, we can see that it is the start of its NptrigPostCreateCallback function.
Memory
These functions, while not implemented as commands available to the user, allow interaction with kernel memory and expose some interesting nuances to consider when working with memory in the kernel. These could be called by Mimikatz as they have correlating IOCTLs, so it is worth walking through what they do.
kkll_m_memory_vm_read
If the name didn’t give it away, this function could be used to read memory in the kernel. It is a very simple function but introduces 2 concepts we haven’t explored yet — Memory Descriptor Lists (MDLs) and page locking. Virtual memory appears contiguous, but physical memory can be all over the place. Windows uses MDLs to describe the physical page layout for a virtual memory buffer, which helps in describing and mapping memory properly. In some cases we may need to access data quickly and directly and we don’t want the memory manager messing with that data (e.g. paging it to disk). To make sure that this doesn’t happen, we can use nt!MmProbeAndLockPages to temporarily lock the physical pages mapped by the virtual pages in memory so they can’t be paged out. This function requires that an operation be specified when called which describes what will be done. These can be either IoReadAccess, IoWriteAccess, or IoModifyAccess. After the operation completes, nt!MmUnlockPages is used to unlock the pages.
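Put together, a minimal read primitive built on these two concepts looks something like the sketch below; it is a generic illustration with a hypothetical function name, not Mimidrv's exact routine.

// Minimal sketch of an MDL-backed read (not Mimidrv's exact code).
NTSTATUS ReadKernelMemory(PVOID Dest, PVOID From, SIZE_T Length)
{
    NTSTATUS status = STATUS_SUCCESS;
    PMDL mdl = IoAllocateMdl(From, (ULONG)Length, FALSE, FALSE, NULL);
    if (mdl == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;

    __try {
        // Pin the physical pages backing 'From' so they can't be paged out mid-copy.
        MmProbeAndLockPages(mdl, KernelMode, IoReadAccess);
        RtlCopyMemory(Dest, From, Length);
        MmUnlockPages(mdl);
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        status = GetExceptionCode();
    }

    IoFreeMdl(mdl);
    return status;
}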
These 2 concepts make up most of kkll_m_memory_vm_read. An MDL is allocated using nt!IoAllocateMdl, pages are locked with IoReadAccess specified, nt!RtlCopyMemory is used to copy memory from the MDL to the output buffer, and then the pages are unlocked with a call to nt!MmUnlockPages. This allows us to read arbitrary memory from the kernel.
kkll_m_memory_vm_write
This function is a mirror image of kkll_m_memory_vm_read, but the Dest and From parameters are switched as we are writing to an address described by the MDL as opposed to reading from it.
kkll_m_memory_vm_alloc
The kkll_m_memory_vm_alloc function allows for allocation of arbitrarily-sized memory from the non-paged pool by calling nt!ExAllocatePoolWithTag, and returns a pointer to the address where memory was allocated. This could be used in place of some of the direct calls to nt!ExAllocatePoolWithTag in Mimidrv as it implements error checking which could make the code a little more stable and easier to read.
kkll_m_memory_vm_free
As with all other types of memory, non-paged pool memory must be freed. The kkll_m_memory_vm_free function does just that with a call to nt!ExFreePoolWithTag. Like the function above, this could be used in place of direct calls to nt!ExFreePoolWithTag, but isn’t currently being used by Mimidrv.
SSDT
When a user mode application needs to create a file by using kernel32!CreateFile, how is the disk accessed and storage allocated for the user? Accessing system resources is a function of the kernel, but these resources are needed by user mode applications, so there needs to be a way to make requests to the kernel. Windows makes use of system calls, or syscalls, to make this possible. Under the hood, here’s a rough view of what kernel32!CreateFile is actually doing: Right at the boundary between user mode and kernel mode, you can see a call to sysenter (this could also be substituted for syscall depending on the processor), which is used to transfer from user mode to kernel mode. This instruction takes a number, specifically a system service number, in the EAX register which determines which system call to make. @j00ru maintains a list of Windows syscalls and their service numbers on his blog. In our kernel32!CreateFile example, ntdll!NtCreateFile places 0x55 into EAX before the SYSCALL instruction. On the SYSCALL, KiSystemService in ring 0 receives the request and looks up the system service function in the System Service Descriptor Table (SSDT), KeServiceDescriptorTable. The SSDT holds pointers to kernel functions, and in this case we are looking for nt!NtCreateFile. In the past, rootkits would hook the SSDT and replace the pointers to kernel functions so that when system services were called, a function inside of their rootkit would be executed instead. Thankfully, Kernel Patch Protection (KPP/PatchGuard) protects critical kernel structures, such as the SSDT, from modification, so this technique does not work on modern x64 systems.
!ssdt
The !ssdt command locates the KeServiceDescriptorTable in memory by searching for an OS version-specific pattern (0xd3, 0x41, 0x3b, 0x44, 0x3a, 0x10, 0x0f, 0x83 in Windows 10 1803+) which marks the pointer to the KeServiceDescriptorTable structure. Inside of the KeServiceDescriptorTable structure is a pointer to another structure, KiServiceTable, which contains an array of 32-bit offsets relative to KiServiceTable itself.
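The commonly cited, unofficial layout of that descriptor table, and the way an entry turns back into a function pointer, is sketched below. Treat the structure definition as reverse-engineering lore rather than a documented API; field names vary between write-ups, and the helper function is purely illustrative.

// Unofficial layout pieced together by the reversing community (not a documented API).
typedef struct _SYSTEM_SERVICE_TABLE {
    PLONG     ServiceTableBase;        // KiServiceTable: array of 32-bit offsets
    PULONG    ServiceCounterTableBase; // only used on checked builds
    ULONG_PTR NumberOfServices;
    PUCHAR    ParamTableBase;          // KiArgumentTable
} SYSTEM_SERVICE_TABLE, *PSYSTEM_SERVICE_TABLE;

// On x64 the low 4 bits of each entry encode the number of stack arguments, so the
// routine itself lives at KiServiceTable + (entry >> 4).
PVOID GetServiceAddress(PSYSTEM_SERVICE_TABLE ssdt, ULONG index)
{
    LONG entry = ssdt->ServiceTableBase[index];
    return (PUCHAR)ssdt->ServiceTableBase + (entry >> 4);
}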
Because we can’t really work with these offsets in WinDbg as they are left-shifted 4 bits, we can right-shift them by 4 bits and add the result to KiServiceTable to get the correct address. We can also use some of WinDbg’s more advanced features to process the offsets and print out the module located at the calculated addresses to get the addresses of all services. This is the exact same thing that Mimidrv is doing after locating KeServiceDescriptorTable in order to locate pointers to services. It first prints out the index (e.g. 85 for NtCreateFile as shown in the earlier WinDbg screenshot) followed by the address. Then kkll_m_modules_fromAddr, which you’ll remember from earlier sections, is called to get the offset of the service/function inside of ntoskrnl.exe. Using the indexes provided by WinDbg, we can see that the address at index 0 points to nt!NtAccessCheck, which resides at offset 0x112340 in ntoskrnl.exe.
Defending Against Driver-Based Threats
Now that we’ve covered the inner workings of Mimidrv, how do we prevent the bad guys from getting their drivers implanted on our systems in the first place? Using drivers against Windows 10 systems introduces some unique challenges for us as attackers, the largest of which being that drivers must be signed. Mimidrv has many static indicators that are easily modifiable, but they require recompilation and re-signing using a new EV certificate. Because of the cost that comes with modifying Mimidrv, a brittle detection is still worth implementing. A few of the default indicators for Mimidrv implantation, organized by source, are:
Windows Event ID 7045/4697 — Service Creation
Service Name: “mimikatz driver (mimidrv)”
Service File Name: *\mimidrv.sys
Service Type: kernel mode driver (0x1)
Service Start Type: auto start (2)
Note: Event ID 4697 contains information about the account that loaded the driver, which could aid in hunting. Audit Security System Extension must be configured via Group Policy for this event to be generated.
Sysmon Event ID 11 — File Creation
TargetFilename: *\mimidrv.sys
Sysmon Event ID 6 — Driver Loaded
ImageLoaded: *\mimidrv.sys
SignatureStatus: Expired
Another, broader approach to this problem is to step back even further and look at the attributes of unwanted drivers as a whole. Third-party drivers are an inevitability for most organizations, but knowing what the standard is for your fleet and identifying anomalies is a worthwhile exercise. Windows Defender Application Control (WDAC) makes this incredibly simple to audit on Windows 10 systems. My colleague Matt Graeber wrote an excellent post on deploying a Code Integrity Policy and beginning to audit the loading of any non-Windows, Early Load AntiMalware (ELAM), or Hardware Abstraction Layer (HAL) drivers. After a reboot, the system will begin generating logs with Event ID 3076 for any driver that would have been blocked with the base policy. From here, we can begin to figure out which drivers are needed outside of the base policy, grant exemptions for them, and begin tuning detection logic to allow analysts to triage anomalous driver loads more efficiently.
Further Reading
If you have found this material interesting, here are some resources that cover some of the details that I glossed over in this post:
Windows Kernel Programming by Pavel Yosifovich
Windows Internals, Part 1 by Pavel Yosifovich, Mark Russinovich, David Solomon, and Alex Ionescu
Practical Reverse Engineering: x86, x64, ARM, Windows Kernel, Reversing Tools, and Obfuscation, Chapter 3 by Bruce Dang, Alexandre Gazet, Elias Bachaalany, and Sébastien Josse
OSR’s The NT Insider publication and community forum
Microsoft’s sample WDM drivers
Broderick Aquilino’s thesis Relevance of Security Features Introduced in Modern Windows OS
Geoff Chappell’s Windows kernel documentation
Thanks to Matt Graeber.
Written by Matt Hand. I like red teaming, picking up heavy things, and burritos. Adversary Simulation @ SpecterOps. github.com/matterpreter
Sursa: https://posts.specterops.io/mimidrv-in-depth-4d273d19e148
-
This talk sheds some light on the intermediate language that is used inside the Hex-Rays Decompiler. The microcode is simple, yet powerful enough to represent real-world programs. By Ilfak Guilfanov. Full abstract and materials: https://www.blackhat.com/us-18/briefi...
-
Microsoft Introduces Free Source Code Analyzer By Ionut Arghire on January 17, 2020 Microsoft this week announced a new source code analyzer designed to identify interesting characteristics of code. Called Microsoft Application Inspector, the new tool doesn’t focus on discovering poor programming practices in the analyzed code. Instead, it looks for interesting features and metadata, such as cryptography, connections to remote resources, and the underlying platform. The need for such a source code analyzer, the tech giant says, is rooted in the broad use of multiple components when building an application, including proprietary and open source code. Although code reuse brings a great deal of benefits, such as faster time-to-market, quality, and interoperability, it also increases risks and comes with the cost of hidden complexity, Microsoft explains. Unlike typical static analysis tools, which rather focus on identifying issues in the analyzed code, Application Inspector attempts to identify characteristics, to help determine what the software is or does. “Basically, we created Application Inspector to help us identify risky third party software components based on their specific features, but the tool is helpful in many non-security contexts as well,” Microsoft says. With the new tool, key changes in a component’s feature set over time (version to version) can be identified, as well as increased attack surface or the introduction of malicious code. The cross-platform, command-line tool can output results in multiple formats, including JSON and interactive HTML, and includes hundreds of feature detection patterns, tailored for popular programming languages, Microsoft says. Supported types of characteristics include application frameworks (development, testing); cloud / service APIs (Microsoft Azure, Amazon AWS, and Google Cloud Platform); cryptography (symmetric, asymmetric, hashing, and TLS); data types (sensitive, personally identifiable information); operating system functions (platform identification, file system, registry, and user accounts); and security features (authentication and authorization). Application Inspector was released in open source and is available for download from Microsoft’s GitHub repository. Sursa: https://www.securityweek.com/microsoft-introduces-free-source-code-analyzer
-
Introduction Windows Kernel Explorer (you can simply call it as "WKE") is a free but powerful Windows kernel research tool. It supports from Windows XP to Windows 10 (32-bit and 64-bit). Compared to WIN64AST and PCHunter, WKE can run on the latest Windows 10 without updating binary files. How WKE works on the latest Windows 10 WKE will automatically download required symbol files if the current system is not supported natively, 90% of the features will work after this step. For some needed data that doesn't exist in symbol files, WKE will try to retrieve them from the DAT file (when new Windows 10 releases, I will upload the newest DAT file to GitHub). If WKE cannot access the internet, 50% of the features will still work. Currently, native support is available from Windows XP to Windows 10 RS3, Windows 10 from RS4 to 19H2 are fully supported by parsing symbol files and DAT file. How to customize WKE You can customize WKE by editing the configuration file. Currently, you can specify the device name and symbolic link name of driver, and altitude of filter. You can also enable kernel-mode and user-mode characteristics randomization to avoid being detected by malware. If you rename the EXE file of WKE, then you need to rename SYS/DAT/INI files together with the same name. About digital signature and negative report from Anti-Virus softwares Because I don't have a digital certificate, I have to use the leaked digital certificate from HT SRL to sign drivers of WKE. I use "DSEFIX" as an alternative solution to bypass DSE, you can try to launch WKE with "WKE_dsefix.bat" if WKE loads driver unsuccessfully on your system. Signing files with the HT SRL digital certificate has a side effect: almost all anti-virus softwares infer files with HT SRL digital signature are viruses, because many hackers use it to sign malwares since 2015. Only idiots implant malicious code into a tool for experienced programmers and reverse engineers, because most users only use WKE in test environments, this kind of behavior is meaningless. About loading driver unsuccessfully If WKE prompts "unable to load driver", there may be the following reasons: Secure boot is enabled. Anti-Virus software prevents the driver from loading. Solutions: Disable secure boot. Add the files of WKE to the white list of Anti-Virus software. About open source It is a bit awkward, so I say straightforwardly: I don't plan to share the source code of this tool, but I may share some source code of test programs that associated with this tool. About WKE can be detected by anti-cheat solutions I received too much SPAM about this issue. I must declare: WKE is not designed to bypass any anti-cheat solution. If you need to use WKE in a specfic environment, please order "binary customization" service. Main Features Process management (Module, Thread, Handle, Memory, Window, Windows Hook, etc.) File management (NTFS partition analysis, low-level disk access, etc.) 
Registry management and HIVE file operation Kernel-mode callback, filter, timer, NDIS blocks and WFP callout functions management Kernel-mode hook scanning (MSR, EAT, IAT, CODE PATCH, SSDT, SSSDT, IDT, IRP, OBJECT) User-mode hook scanning (Kernel Callback Table, EAT, IAT, CODE PATCH) Memory editor and symbol parser (it looks like a simplified version of WINDBG) Hide driver, hide/protect process, hide/protect/redirect file or directory, protect registry and falsify registry data Path modification for driver, process and process module Enable/disable some obnoxious Windows components Screenshots In order to optimize the page load speed in low quality network environments, I only placed one picture on this page. Thanking List Team of WIN64AST (I referenced the UI design and many features of this software) Team of PCHunter (I referenced some features of this software) Team of ProcessHacker (I studied the source code of this software, but I didn't use it in my project) Author of DSEFIX (I use it as an alternative solution to load driver) Contact E-MAIL: AxtMueller#gmx.de (Replace # with @) If you find bugs, have constructive suggestions or would like to purchase a paid service, please let me know. You'd better write E-MAIL in English or German, I only reply to E-MAILs that I am interested in. Paid services: Feature customization: Add the features you need to WKE. Binary customization: Modify obvious characteristics of WKE and remove all of my personal information in WKE. Implant link: Implant link in WKE on "About" page, all users will see it when main dialog appears. Specific feature separation: Copy source code of specific feature to a separate project. Driver static library: It contains most of main features of WKE. Driver source code: Entire driver source code of WKE. Revision History Current Version: 20200107 Bug fix: Inputbox works improperly on the latest Windows 10. Revoked Versions: 00000000 These versions have serious security issues and should not be used anymore. Sursa: https://github.com/AxtMueller/Windows-Kernel-Explorer
-
CurveBall – An Unimaginative Pun but a Devastating Bug By Steve Povolny, Philippe Laulheret and Douglas McKee on Jan 17, 2020 2020 came in with a bang this year, and it wasn’t from the record-setting number of fireworks on display around the world to celebrate the new year. Instead, just over two weeks into the decade, the security world was rocked by a fix for CVE-2020-0601 introduced in Microsoft’s first patch Tuesday of the year. The bug was submitted by the National Security Administration (NSA) to Microsoft, and though initially deemed as only “important”, it didn’t take long for everyone to figure out this bug fundamentally undermines the entire concept of trust that we rely on to secure web sites and validate files. The vulnerability relies on ECC (Elliptic Curve Cryptography), which is a very common method of digitally signing certificates, including both those embedded in files as well as those used to secure web pages. It represents a mathematical combination of values that produce a public and private key for trusted exchange of information. Ignoring the intimate details for now, ECC allows us to validate that files we open or web pages we visit have been signed by a well-known and trusted authority. If that trust is broken, malicious actors can “fake” signed files and web sites and make them look to the average person as if they were still trusted or legitimately signed. The flaw lies in the Microsoft library crypt32.dll, which has two vulnerable functions. The bug is straightforward in that these functions only validate the encrypted public key value, and NOT the parameters of the ECC curve itself. What this means is that if an attacker can find the right mathematical combination of private key and the corresponding curve, they can generate the identical public key value as the trusted certificate authority, whomever that is. And since this is the only value checked by the vulnerable functions, the “malicious” or invalid parameters will be ignored, and the certificate will pass the trust check. As soon as we caught wind of the flaw, McAfee’s Advanced Threat Research team set out to create a working proof-of-concept (PoC) that would allow us to trigger the bug, and ultimately create protections across a wide range of our products to secure our customers. We were able to accomplish this in a matter of hours, and within a day or two there were the first signs of public PoCs as the vulnerability became better understood and researchers discovered the relative ease of exploitation. Let’s pause for a moment to celebrate the fact that (conspiracy theories aside) government and private sector came together to report, patch and publicly disclose a vulnerability before it was exploited in the wild. We also want to call out Microsoft’s Active Protections Program, which provided some basic details on the vulnerability allowing cyber security practitioners to get a bit of a head start on analysis. The following provides some basic technical detail and timeline of the work we did to analyze, reverse engineer and develop working exploits for the bug. This blog focuses primarily on the research efforts behind file signing certificates. For a more in-depth analysis of the web vector, please see this post. Creating the proof-of-concept The starting point for simulating an attack was to have a clear understanding of where the problem was. 
An attacker could forge an ECC root certificate with the same public key as a Microsoft ECC Root CA, such as the ECC Product Root Certificate Authority 2018, but with different “parameters”, and it would still be recognized as a trusted Microsoft CA. The API would use the public key to identify the certificate but fail to verify that the parameters provided matched the ones that should go with the trusted public key. There have been many instances of cryptography attacks that leveraged failure of an API to validate parameters (such as these two) and attackers exploiting this type of vulnerability. Hearing about invalid parameters should raise a red flag immediately. To minimize effort, an important initial step is to find the right level of abstraction and details we need to care about. Minimal details on the bug refer to public key and curve parameters and nothing about specific signature details, so likely reading about how to generate public/private key in Elliptical Curve (EC) cryptography and how to define a curve should be enough. The first part of this Wikipedia article defines most of what we need to know. There’s a point G that’s on the curve and is used to generate another point. To create a pair of public/private keys, we take a random number k (the private key) and multiply it by G to get the public key (Q). So, we have Q = k*G. How this works doesn’t really matter for this purpose, so long as the scalar multiplication behaves as we’d expect. The idea here is that knowing Q and G, it’s hard to recover k, but knowing k and G, it’s very easy to compute Q. Rephrasing this in the perspective of the bug, we want to find a new k’ (a new private key) with different parameters (a new curve, or maybe a new G) so that the ECC math gives the same Q back. The easiest solution is to consider a new generator G’ that is equal to our target public key (G’= Q). This way, with k’=1 (a private key equal to 1) we get k’G’ = Q which would satisfy the constraints (finding a new private key and keeping the same public key). The next step is to verify if we can actually specify a custom G’ while specifying the curve we want to use. Microsoft’s documentation is not especially clear about these details, but OpenSSL, one of the most common cryptography libraries, has a page describing how to generate EC key pairs and certificates. The following command shows the standard parameters of the P384 curve, the one used by the Microsoft ECC Root CA. Elliptic Curve Parameter Values We can see that one of the parameters is the Generator, so it seems possible to modify it. Now we need to create a new key pair with explicit parameters (so all the parameters are contained in the key file, rather than just embedding the standard name of the curve) and modify them following our hypothesis. We replace the Generator G’ by the Q from Microsoft Certificate, we replace the private key k’ by 1 and lastly, we replace the public key Q’ of the certificate we just generated by the Q of the Microsoft certificate. To make sure our modification is functional, and the modified key is a valid one, we use OpenSSL to sign a text file and successfully verify its signature. Signing a text file and verifying the signature using the modified key pair (k’=1, G’=Q, Q’=Q) From there, we followed a couple of tutorials to create a signing certificate using OpenSSL and signed custom binaries with signtool. Eventually we’re greeted with a signed executable that appeared to be signed with a valid certificate! 
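Restating the trick as a worked equation (nothing new here, just the relationships already described above):

Q = k * G            (the legitimate CA key pair on P-384; the private key k is unknown to the attacker)
G' = Q,  k' = 1      (the attacker-chosen "explicit parameters": reuse the CA's public key as the generator)
k' * G' = 1 * Q = Q  (the forged key pair reproduces the trusted public key, but now with a known private key)

Since the vulnerable functions compare only the public key Q and ignore the supplied curve parameters, the forged certificate chains up as if it were signed by the trusted Microsoft ECC root.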
Spoofed/Forged Certificate Seemingly Signed by Microsoft ECC Root CA Using Sysinternal’s SigChecker64.exe along with Rohitab’s API Monitor (which, ironically is on a site not using HTTPS) on an unpatched system with our PoC, we can clearly see the vulnerability in action by the return values of these functions. Rohitab API Monitor – API Calls for Certificate Verification Industry-wide vulnerabilities seem to be gaining critical mass and increasing visibility even to non-technical users. And, for once, the “cute” name for the vulnerability showed up relatively late in the process. Visibility is critical to progress, and an understanding and healthy respect for the potential impact are key factors in whether businesses and individuals quickly apply patches and dramatically reduce the threat vector. This is even more essential with a bug that is so easy to exploit, and likely to have an immediate exploitation impact in the wild. McAfee Response McAfee aggressively developed updates across its entire product lines. Specific details can be found here. Sursa: https://www.mcafee.com/blogs/other-blogs/mcafee-labs/curveball-an-unimaginative-pun-but-a-devastating-bug/
-
Welcome to Bugcrowd University – Advanced Burp Suite Advanced! Adding onto the Introduction module found here, we explore further configurations, functionality, and some extensions that will enable you to better utilize Burp Suite.Content created by Bugcrowd Ambassador Jasmin Landry (jr0ch17). Follow him on Twitter @jr0ch17.
-
Lesser-known Tools for Android Application PenTesting 30 Dec 2019 » pentest Introduction In the past few months, I’ve been doing a lot of Android application security assessments. Over time, I became familiar with the different tools, popular or not, that helped me in my assessments. In this post, I’ll list down these not-so-popular tools (in my opinion based on the different sources and blogs that I have read where these tools were not mentioned) that I’m using during my engagements. Note: There’s nothing fancy in this post. Just some tools that I found useful. Magisk While Magisk is a very popular framework and shouldn’t be considered as one of the “lesser-known” tools, it’s important that I mention it here since some of the tools included in this post are either a feature of Magisk or a module that you can install with Magisk. So if you don’t have Magisk on your testing device, make sure to install it now! Magisk Hide Magisk Hide is the first tool that will be discussed since it has saved me a lot of time when bypassing an application’s root detection mechanism. Magisk Hide is one of the features of Magisk, and bypassing root detection is as simple as toggling the switch ON. As an example, let’s try bypassing the root detection mechanism of the PS4 Remote Play app. When running this application on a rooted device, the following error shows up: To bypass the root detection of this application, open Magisk Manager, tap the menu icon ☰ (top left corner) and select Magisk Hide. Select the target application (“PS4 Remote Play” in this case) from the list of applications. Run the app again and we should now be able to launch the PS4 Remote Play without the error. If root detection was still not bypassed after adding the application to the Magisk Hide list, try hiding the Magisk Manager app itself. To do this, open Magisk Manager, tap the menu icon ☰ (top left corner) and select Settings. Then tap the Hide Magisk Manager option. This repackages the Magisk Manager app with a random package name and changes the app name from Magisk Manager to just Manager. Move Certificate Starting with Android Nougat (API Level 24), applications, by default, no longer trust user-added certificates for secure connections. This results in the following errors when capturing HTTPS traffic from an application running on Android Nougat and above. One method to resolve this issue is to add user-installed certificates to the system certificate store. This can be done manually or automatically using the Magisk module Move Certificate. Of course, I prefer the Magisk way! After installing the module, all user-installed certificates will be added automatically to the system certificate store. DisableFlagSecure Sensitive applications, such as mobile banking apps, password managers, 2FA apps, etc., do not allow screenshots to be taken for security purposes. As an example, when taking a screenshot of the Aegis Authenticator 2FA app, the following error shows: When testing this kind of application, taking evidence for findings which require showing the app or its screens is a bit of a hassle. Before, what I would do is to have another phone with me and take a photo of my testing device. This method annoys me because I have to make sure that the photo I’m taking is focused and clear. Plus, it doesn’t look great as evidence in a pentest report. Then I discovered the Xposed module DisableFlagSecure. This module disables the FLAG_SECURE window flag system-wide. FLAG_SECURE is responsible for preventing screenshots from being taken.
After installing DisableFlagSecure from Xposed and rebooting the device, screenshots can now be taken. Smali Patcher If you want to disable FLAG_SECURE “systemless-ly”, this can be done through Magisk with the help of Smali Patcher. After running SmaliPatcher.exe for the first time, it will download the necessary binaries and tools that it needs and will store them in the bin and tmp folders. Before clicking the ADB PATCH button, ensure the following are met: USB Debugging is enabled Device is attached to the PC USB Debugging connection is authorized Desired patch is ticked (Secure flag in this case) Once SmaliPatcher is done running, a zip file will be created on the working directory. Just flash this zip file through Magisk, reboot, and the patch (disabling FLAG_SECURE in this case) will be applied. SmaliPatcher also supports other patches as seen from the “Patch Options” section of the tool. It’s up to the reader to discover these patches. ADB Manager If you’re like me who has several testing devices but has only one cable available for connecting these devices into the computer, or you just hate cables, I found ADB Manager to be very useful. This application allows you to establish an ADB shell via Wi-Fi. Upon opening ADB Manager, just click the Start Network ADB button. To establish an ADB shell to the testing device (make sure USB Debugging is enabled), just type the following commands: adb connect <ip-addr-shown-in-ADB-Manager>:<port-shown-in-ADB-Manager> adb shell ProxyDroid When intercepting traffic from a device, you’ll observe a lot of traffic coming from applications other than the target application. Some of this traffic comes from background services running on the phone, and these unwanted data fill up the proxy history. This causes confusion as to whether a certain HTTP request came from the target application or not. To filter out these unwanted data, the simplest solution is to add the list of target hosts under the proxy’s target scope setting. However, I find this method to be repetitive since I have to do this for every engagement. Also, what if I just wanted to analyze a particular app and I don’t have an idea about the hosts the app is making requests to? Here comes ProxyDroid! Using its Individual Proxy option, you can select specific app or apps which you want to proxy. Under the Individual Proxy setting, just tick the app or apps you want to proxy, then switch ON ProxyDroid and everything should be good. pidcat Some applications write sensitive data, in plain-text format, in the system log. The system log can be viewed using Android’s Logcat utility. By simply running the command adb logcat, it prints out a lot of unnecessary data which makes the analysis very hard and confusing. To remove these unnecessary logs, we can filter Logcat’s output based on the target application using the following one-liner command: adb logcat | grep "$(adb shell ps | grep <target-app-package-name> | awk '{print $2}')" While the above command cleans up the messy Logcat’s default output, my preferred method is by using pidcat. To show log entries for processes from a specific application, just run this simple command: pidcat <target-app-package-name> Aside from the simplicity of running the command, you also have a nice colorful output. resize When typing long commands in an ADB shell, you’ll notice that the terminal size is limited. This is annoying especially when I’m viewing and analysing a file’s contents. Thankfully, BusyBox’s resize binary exists. 
Just run the command resize and you can now enjoy the full size of your terminal. If you’re testing on a physical device, you can install BusyBox via Playstore or do it “systemless-ly” via Magisk In an emulator which does not have the Google Playstore app, you can install BusyBox with the following commands: wget --no-parent --no-host-directories --cut-dirs 3 -r https://busybox.net/downloads/binaries/1.30.0-i686/ -P /tmp/busybox adb push /tmp/busybox /data/data/busybox adb shell "mount -o rw,remount /system && mv /data/data/busybox /system/bin/busybox && chmod 755 /system/bin/busybox/busybox && /system/bin/busybox/busybox --install /system/bin" Conclusion That’s all. Thanks for reading! Sursa: https://captmeelo.com/pentest/2019/12/30/lesser-known-tools-for-android-pentest.html
-