Everything posted by Nytro

  1. CVE-2022-33679 One day based on https://googleprojectzero.blogspot.com/2022/10/rc4-is-still-considered-harmful.html Usage usage: CVE-2022-33079.py [-h] [-ts] [-debug] [-dc-ip ip address] target serverName Example Sursa: https://github.com/Bdenneu/CVE-2022-33679
  2. Reverse engineering an EV charger Published date: 11.11.2022 We decided to look into one of the most prevalent chargers on Norwegian roads. Written by Harrison Sand, Security Researcher, mnemonic, and Andreas Claesson, Senior Security Consultant. TL;DR This blog post walks through our efforts reverse engineering the Zaptec Pro charger, an electric vehicle charger found in many parking lots and apartment buildings around Norway. The post shows how we went about testing the device, including some of our trials and errors during the process. By analyzing the device’s firmware and compiling a custom bootloader, we were able to root the device and dig into how it works. Although we found that security appears to have been considered at multiple steps along the way in developing the Zaptec Pro charger, the blog post also presents some potential improvements. Introduction Electric vehicles have become quite common over the past few years. Here in Norway, they make up over half of all new car sales. The chargers that support EVs have effectively become critical infrastructure that we rely on for everyday life. At the same time, the publicly available information about how they work is limited. Out of curiosity we decided to purchase the Zaptec Pro. This model was intended for larger, networked installations like parking lots and apartment buildings. The Zaptec Pro was among the most prevalent chargers on Norwegian roads at the time this post was written. Device overview The charger is a surprisingly powerful device. It runs a full-fledged Debian-based operating system with Wi-Fi, 4G LTE, Bluetooth, and power-line (PLC) network connectivity. It wouldn’t be too far off to think of it as a Raspberry Pi on steroids, with some 230V relays. To an end user, Zaptec is probably just the logo you see on the black box you plug your car into. Behind the scenes, however, Zaptec has a whole cloud ecosystem designed to switch those relays on and off, as well as bill you for electricity consumption. To use a public charger, a customer will normally have to download an app. These tend to be released by parking garage companies and charging network operators. A customer will enter their payment details in the app, and select a charger to use. At this point the app will make a request up to the cloud, and an integration between the app’s backend and Zaptec is used to start a new charging session. Zaptec uses Azure IoT Hub to communicate with and control their devices. More on how this works is discussed below. Teardown The charger has two PCBs, stacked on top of one another and linked via a 40-pin connector. The bottom PCB contains most of the power-related components, while the upper PCB houses the “smart” components. Taking a deeper look at the upper PCB, there are a few different components of interest: In green there’s the piezo buzzer and RGB indicator LED In blue there are the RFID components Yellow is the 4G modem and antenna Orange is a QCA7005 Qualcomm chipset, used for network communication over power lines (PLC) Purple is a PIC24E microcontroller And in red, from left to right, we have a 512 MB NAND flash memory chip, an ARM Cortex-A7 based microcontroller, and 512 MB of RAM Debug interfaces We had previously learned that the device was running Linux by connecting it to a Wi-Fi network and running a port scan. It had an SSH listener on port 22. Basic brute-force password attacks didn’t work, so it was time to start looking around for debug interfaces on the PCB.
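For reference, the port-scan step mentioned above comes down to standard tooling. A quick sketch (the IP address is a placeholder for whatever address the charger gets on your Wi-Fi network):

# Full TCP port scan with service/version detection; on the Zaptec this showed SSH on port 22
nmap -sV -p- 192.168.1.50

# Grab the SSH banner directly to see what server and version is listening
nc 192.168.1.50 22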
The board had a fairly clean layout, and components were grouped into a few different sections. The most likely chip on the board to run Linux was the ARM processor. It was also next to flash storage and RAM chips, which seemed logical enough. Since it was an ARM processor, we were hoping to see JTAG/SWD interfaces, and/or UART serial ports. The JTAG/SWD ports, if left enabled, should theoretically allow us to dump firmware and modify running code. The processor physically has all these pins, and they can be found by looking at the datasheet. However, soldering to an in-circuit BGA was out of the question. So, we’re more or less at the mercy of what the designers exposed on the PCB. Though it’s not hard to notice an unpopulated 3-pin header to the left of the NAND flash. The header had three pins, one being ground, so it was probably: Nothing (disabled in production firmware) ARM’s Serial Wire Debug (SWD) UART We soldered a header on and hooked up a logic analyzer to the pins, and discovered a UART serial interface. The console log provided useful information about the device, but unfortunately didn’t give us a shell. The bootloader had also been locked down and didn’t allow for interrupting the boot process, which prevented us from playing around in the U-Boot environment. It was at this point we realized that the little silver rectangle below the ARM processor was just about the right size to be a microSD card slot. We put an SD card in the slot, then rebooted the charger. It didn’t boot. Either we made a brick, or the charger was trying to use the SD card as a boot device and we had nothing for it to boot. Removing the SD card and booting again validated our suspicions. Investigating the boot process We tried flashing the SD card with various images for development boards based on the same processor, with varying success. A few images managed to start but inevitably would hang in U-Boot before loading the Linux kernel. This issue was ultimately down to the fact that there were hardware differences between the development board and the Zaptec PCB. This resulted in the images being incompatible. Zaptec likely used different RAM, or connected the various components to different pins. The way U-Boot and Linux know about the hardware configuration for a device is through a devicetree. This is essentially a file that describes hardware like RAM and flash storage, so that the OS knows how to interact with them. Normally, a designer will sit with board schematics to create a devicetree. Since we didn’t have access to Zaptec’s schematics we were left with finding another solution. Dumping the NAND flash Access to a device’s firmware is always nice to have. Not only would it provide further information about how the device worked, but if we got access to the Zaptec devicetree, it could allow us to compile our own compatible bootloader or OS. It wasn’t the cleanest job in the world, but we were able to successfully desolder the TSOP48 NAND chip and dump its contents using a TL866II Plus programmer. Firmware analysis The binary file produced by the programmer is an exact byte-for-byte copy of the NAND flash. This presents some challenges, as the partitions we want to analyze are mixed in with other data such as error correction bits and space reserved for wear leveling. This is known as out-of-band (OOB) data and is introduced by the NAND controller integrated into the ARM processor.
Damien Cauquil held a presentation at HITB Amsterdam in 2019 that went into detail about how this process works on I.MX based processors. Lucky for us, he released a tool that removes the OOB data. It produces a binary file similar to what U-Boot or Linux would see when interacting with the NAND flash. After poking around in the firmware for a while, we were able to carve out Zaptec’s devicetree binary from the boot partition. Compiling a custom bootloader We considered building Linux to boot from the SD card, but eventually decided to just compile U-Boot. If we resoldered the NAND flash, and had the ability to enter a U-Boot environment, we could control the boot arguments passed to the Linux kernel. This would allow us to enter single user mode, which essentially just drops you into a root shell without prompting for a password. We included the devicetree binary as part of the U-Boot build process, and flashed the bootloader to an SD card. Once in our custom U-Boot environment we set some environment variables that told U-Boot to boot from the NAND flash and enter single user mode. Once in single user mode we set a new root password, and rebooted without the SD card. Now we could connect with SSH over WiFi using our new root password! How things work Now with root access to a running device, we wanted to investigate a few different aspects of how the charger worked. The Bluetooth PIN code The device comes set from the factory with a four-digit PIN code. It’s printed on the box and can’t be changed. It is used to manage a few settings, like how the charger connects to the internet. Two questions we wanted to answer were: How was the PIN code generated? Many IoT devices generate security codes from easily guessable identifiers, like a serial number or MAC address. If this was true in this case, maybe we could manage arbitrary devices. What can you do with access to a devices PIN code? Can you get free charging or start a botnet of bitcoin miners? From what we can tell it looks like the PIN code is set from a server in the factory when the device is initially programmed. Looking through the very first logs that appear on the device hints at this. Since the PIN code (and Azure access token) is provisioned from a server in the factory, we’re not getting access to the code that generates these secrets anytime soon. Ideally the PIN should be a truly random number, and not based on an identifier or some wonky cryptography. That still begs the question of what can you actually do if you know a PIN code. Before purchasing the charger, we decompiled the Android application and poked around at the Bluetooth Low Energy (BLE) functionality. In the Android application’s list of BLE characteristics the “RunCommand” was certainly interesting. Digging a bit further in the Android application’s code revealed commands to start and stop charging via Bluetooth. Assuming you had the PIN code, maybe you could get free charging by issuing these commands over Bluetooth? Now with access to the charger’s code we could see what it actually does. The BLE interface was written in Python, which made things easy to look through. So essentially what happens here is the value of whatever you send to the BLE characteristic gets passed to the smart_service.RunCommand() function. The smart service is another process running on the charger written in .NET mono. Python communicates the smart service through a D-Bus messaging interface. Let’s go see what the RunCommand function can do. 
The .NET code only seems to implement the Reboot and UpdateFirmware commands. The StartCharging and StopChargingFinal commands look to be functions that were partially implemented in the Android application, but never implemented on the charger. No free charging via Bluetooth. So apart from reconfiguring the device, and potentially causing it to disconnect from the network and stop working, what you can do via the Bluetooth interface seems to be limited. It is worth noting that due to the nature of BLE, it would be possible to sniff the PIN code when the device is configured by a technician, but this would require somebody to be listening at that exact moment in time. Zaptec also implemented PIN brute force protection in the Python code. If you enter an incorrect PIN code too many times the Bluetooth interface switches off. The amount of time the Bluetooth interface remains off increases with each incorrect PIN attempt. So, you could brute force the PIN but it would take a long time. And at face value, access to the BLE interface doesn’t seem to be terribly interesting for an attacker. The SSH listener One of the first things we wanted to do after connecting the device to a network was figure out if there was a hardcoded root password. There wasn’t. Zaptec placed two public SSH keys on the charger, but the shadow file was empty until we configured a password ourselves in single user mode. This configuration allows Zaptec to log in via SSH with the correct key pair, but effectively disables password authentication for both the SSH listener and the UART console. The Cloud connectivity The final area we wanted to investigate was the cloud connectivity. We took a few packet captures early on and knew that the charger was talking to the Azure IoT Hub, but couldn’t see what it was sending because the traffic was encrypted. With root access we were able to install our own root certificate and perform TLS decryption by proxying traffic through mitmproxy. Analyzing a decrypted PCAP allowed us to verify that the charger was communicating with the Azure IoT Hub using Shared Access Signatures, an authentication mechanism that derives credentials based on a secret that was provisioned at the factory. Looking at a few messages published to the IoT Hub revealed the types of data it sends back to Zaptec. A quick look revealed a few things like Linux kernel logs and electricity consumption data. We also took a look at the .NET code to see everything it was capable of doing via the IoT Hub. We weren’t able to easily test if this functionality worked, but we did find what appears to be Zaptec’s means of remotely debugging their devices. The first is a function called RunRemoteCommand. This passes the contents of a message received from the cloud directly to Process.Start. A second interesting function called StartRemoteTunnel appears to allow Zaptec to create a reverse shell back to an SSH listener on the internet. Conclusion All in all, we didn’t find any critical security issues during our investigation. Though there is probably room for improvement in a few areas. For example, we would have had a much harder time getting root access to the device if they had used signed firmware, or encrypted the NAND flash. Both of these features are supported by the ARM processor already built into the charger. Security appears to have been considered at multiple steps along the way, and was better than we expected going into the project.
Sursa: https://www.mnemonic.io/resources/blog/reverse-engineering-an-ev-charger/
  3. wa-tunnel - HTTP Tunneling through Whatsapp This is a Baileys based piece of code that lets you tunnel TCP data through two Whatsapp accounts. This can be usable in different situations, for example network carriers that give unlimited whatsapp data or airplanes where you also get unlimited social network data. It's using Baileys since it's a WS based multi-device whatsapp library and therefore could be used in android in the future, using Termux for example. The idea is to use it with a proxy setup on the server like this: [Client (restricted access) -> Whatsapp -> Server -> Proxy -> Internet] Apologizes in advance since Javascript it's not one of my primary coding languages 😕 Use only for educational purpose. Why? I got the idea While travelling through South America network data on carriers is usually restricted to not many GBs but WhatsApp is usually unlimited, I tried to create this library since I didn't find any usable at the date. Setup You must have access to two Whatsapp accounts, one for the server and one for the client. You can forward a local port or use an external proxy. Server side Clone the repository on your server and install node dependencies. cd path/to/wa-tunnel npm install Then you can start the server with the following command where port is the proxy port and host is the proxy host you want to forward. And number is the client WhatsApp number with the country code alltogether and without +. node server.js host port number You can use a local proxy server like follows: node server.js localhost 3128 12345678901 Or you can use a normal proxy server like follows: node server.js 192.168.0.1 3128 12345678901 Client Side Clone the repository on your server and install node dependencies. cd path/to/wa-tunnel npm install Then you can start the server with the following command where port is the local port where you will connect and number is the server WhatsApp number with the country code alltogether and without +. node client.js port number For example node client.js 8080 1234567890 Usage The first time you open the script Baileys will ask you to scan the QR code with the whatsapp app, after that the session is saved for later usage. It may crash, that's normal after that just restart the script and you will have your client/server ready! It splits network packages to not get timed out by WhatsApp, at the moment it's hardcoded in wasocket.js, by default it's limited at 20k characters per message, I have done multiple tests and anything below that may get you banned for sending too many messages and any above 80k may timeout. Once you have both client and server ready you can test using curl and see the magic happen. curl -v -x proxyHost:proxyPort https://httpbin.org/ip With the example commands would be: curl -v -x localhost:8080 https://httpbin.org/ip It has been tested also with a normal browser like Firefox, it's slow but can be used. You can also forward other protocol ports like SSH by setting up the server like this: node server.js localhost 22 12345678901 And then connect to the server by using in the client: ssh root@localhost -p 8080 Disclaimer Using this library may get your WhatsApp account banned, use with a temporary number or at your own risk. TO-DO Make an Android script to install node dependencies on termux When Baileys supports calls, implement package sending through calls Implement sending files for big data packages to reduce messages and maybe improve speed Documentation License MIT Sursa: https://github.com/aleixrodriala/wa-tunnel
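As a rough illustration of the packet-splitting behaviour described above (a hedged sketch, not the project's actual wasocket.js code; sendWhatsAppText stands in for a Baileys send call):

// Split an outgoing TCP payload into WhatsApp-sized text messages.
// CHUNK_SIZE mirrors the ~20k character limit mentioned in the README.
const CHUNK_SIZE = 20000;

async function sendTcpPayload(streamId, data, sendWhatsAppText) {
  const encoded = data.toString('base64'); // binary-safe over a text channel
  const total = Math.ceil(encoded.length / CHUNK_SIZE);
  for (let i = 0; i < total; i++) {
    const chunk = encoded.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    // Prefix each message with a stream id and sequence number so the
    // other side can reassemble the chunks in order.
    await sendWhatsAppText(`${streamId}:${i}:${total}:${chunk}`);
  }
}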
  4. The complete GraphQL Security Guide: Fixing the 13 most common GraphQL Vulnerabilities to make your API production ready Jens Neuse 2021-09-01·26min read WunderGraph Cloud Early Access Before we get into the blog post. WunderGraph Cloud is being released very soon. We’re looking for Alpha and Beta testers for WunderGraph Cloud. It's 2021, GraphQL is on its rise to become a big player in the API Ecosystem . That's perfect timing to talk about how to make your GraphQL APIs secure and ready for production. So here's my thesis: GraphQL is inherently insecure. I'll prove this throughout the article and propose solutions. One of the solutions will require some radical change in the way we're thinking about GraphQL, but it will come with a lot of benefits that go way beyond just security. If you pick a random GraphQL framework and run it with default settings in production, disaster is waiting to happen. The 13 most common GraphQL Vulnerabilities 1. Parsing a GraphQL Operation vs. parsing a URL Why? Why is GraphQL so much more vulnerable than e.g. REST? Let's compare a URL against a GraphQL Operation. According to Wikipedia, the concept of the URL was first published in 1994, that's 27 years ago. If we search the same source for the birth of GraphQL, we can see, it's Sep 2014, around 7 years old. This gives parsing URLs an advantage of 20 years over parsing GraphQL Operations. Quite the headstart! Next, let's have a look at the antlr grammar for both. The grammar for parsing a URL is 86 lines. The grammar for parsing a GraphQL document is 325 lines. So, it's fair to say that the GraphQL language is around 4 times more complex than the one defining a URL. If we factor in both variables, it's obvious that there must be a lot more experience and expertise in parsing URLs than parsing GraphQL operations. But why is this even a problem? Recently, a friend of mine analyzed some popular libraries to see how fast they are in parsing GraphQL queries. It made me happy to see that my own library was performing quite well . At the same time, I was surprised that some libraries didn't accept the test Operations while other were able to parse them. What does this mean for us? The person who performed the benchmarks hand-picked a number of GraphQL libraries and ran a few benchmarks. This was enough to find some bugs. What if we picked all GraphQL libraries and frameworks and test them against numerous GraphQL Operations? Keep in mind that we're still talking about simply parsing the Operations. What if we add building a valid AST into the equation? What if we add executing the Operations as well? We almost forgot about validating Operations, a topic in itself. A few years ago, there was a small group of people who started an amazing open source project: CATS The GraphQL Compatibility Acceptance Test . It's quite a mouthful, but the idea is brilliant. The idea was to build a tool so that different GraphQL implementations can prove that they work as intended. Unfortunately, the project's last commit is from 2018. Alright, parsing a URL seems simple and well understood. Parsing GraphQL Operations is a nightmare. You should not trust any GraphQL library without heavy testing, including fuzzing. We're all humans. Building a GraphQL library is complex. I'm the owner of an implementation written in Go . It's no easy, it's a lot of code. A lot of code means, a lot of potential for bugs. And don't get me wrong, this is not about hand-written parsers vs. generated parsers from a grammar. 
Turning a string into an AST is just one small piece of the puzzle. There are plenty of opportunities left for bugs. 2. Normalizing GraphQL Queries can potentially leak fields You don't have to normalize a URL. If you can parse it in your language of choice, it's valid, otherwise it's not. It's a different story with GraphQL. Here's an example: { foo foo: foo ... { foo ... { foo } } ... @include(if: true){ foo } ...@skip(if: false){ foo } } A lot of foo! Let's normalize the Query. {foo} That's a lot less foo, nice! I could have made it more complicated with more fragments, nesting, etc... What's the point? How can we prove that all libraries and frameworks normalize the Query correctly? What happens if something goes wrong here? It might give an attacker an opportunity to ask for fields which he/she is not allowed to use. Maybe there's a hidden field and by wrapping it with a weird inline fragment @skip combo, we're able to query it. As long as we're not able to prove that it's impossible, I'd consider it possible, prove me wrong! To summarize: No Normalization for URLs. More nightmares for GraphQL. 3. GraphQL Operation Validation, an absolute minefield I've implemented GraphQL Operation validation myself. One of the unit test files is more than 1000 LOC. What I've done is, I copied the complete structure from the GraphQL Specification one by one and turned it into unit tests. There are various ways this could go wrong. Copy & paste errors, general misunderstanding, implementing the logic to make the tests green while the logic is still wrong. There are a lot of pitfalls you could fall into. Other libraries and frameworks are probably taking different approaches. You could also copy the tests from the reference implementation, but that's also no guarantee that the logic is 100% correct. Again, as we don't have a project like CATS anymore, we're not really able to prove if our implementations are correct. I hope everybody is doing their best to get it right. Until then, don't trust any GraphQL validation library if you haven't tested it yourself. Use many Operations for testing. Summary: If a standard library can parse your URL, it's valid. If your library of choice validates a GraphQL Operation, you should still be cautious, especially when you're dealing with PII (personally identifiable information). At this point, we've probably passed a few bugs already by passing our request through the parser, normalization and validation. The real trouble is still ahead of us, executing the Operation. When executing a GraphQL Operation, it's not only the frameworks' responsibility to do the right thing. At this point, it's also a great chance for the framework user to mess up. This has to do with the way GraphQL is designed. A GraphQL Operation can walk from node to node, wherever it wants, if you don't do anything about it. So, the range of possible attacks goes from simple denial of service attacks to more sophisticated approaches that return data which should not be returned. For that reason, we'll give this section a bit more structure. 4. GraphQL Denial of Service Attacks If you want to rate-limit a REST API user, all you have to do is store their IP in an in-memory store, e.g. Redis, and rate limit them with your algorithm of choice, e.g. a sophisticated window rate limiter. Each request counts as one request; this sounds stupid but matters in the context of GraphQL. With GraphQL on the other hand, you cannot apply the same pattern.
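To make that contrast concrete, here is roughly what the REST-style pattern looks like: a minimal, in-memory fixed-window limiter (a sketch; in practice you would use a shared store like Redis, as mentioned above). The key point is that it counts requests, not the work a request causes:

// Express-style middleware: every request costs exactly "1".
const WINDOW_MS = 60_000;
const LIMIT = 100;
const counters = new Map(); // ip -> { windowStart, count }

function rateLimit(req, res, next) {
  const now = Date.now();
  const entry = counters.get(req.ip) || { windowStart: now, count: 0 };
  if (now - entry.windowStart > WINDOW_MS) {
    entry.windowStart = now;
    entry.count = 0;
  }
  entry.count += 1;
  counters.set(req.ip, entry);
  if (entry.count > LIMIT) return res.status(429).send('Too many requests');
  next();
}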
One single Operation is enough to bring the GraphQL Server to a halt. Here are a few examples of how to build a denial of service attack with GraphQL: Moving back and forth, forever. { foo { bar { foo { bar { # repeat forever } } } } } Simply ask for a lot of foos: { a: foo b: foo c: foo # ... aa: foo # ... zzzzzzzzzzzz: foo } How about exploiting N+1 problems? { arrayField { # returns 100 nodes moreArray { # returns 100 nodes moar { # returns 100 nodes storyGoesOn # returns 100 nodes ... } } } } Each layer of nesting asks for more nested data, hence exponential growth of execution complexity. A few things you should consider: Usually, GraphQL operations come in the form of JSON over an HTTP POST request. This JSON could look like this: { "query": "query Foo($bar: String!) {foo(bar:$bar){bar}}", "operationName": "Foo", "variables": { "bar": "baz" } } The first thing you should do is to limit the amount of JSON bytes you're accepting. How large can your largest Operations be? A few Kilobytes? Megabytes? Next, when parsing the Operation, how many Nodes are too many Nodes? Do you accept any amount of Nodes in a Query? If you have analytics running on your system, maybe take the largest Query, add a margin on top and set the limit there? Speaking of the maximum number of Nodes when parsing an Operation: does your framework of choice actually allow you to limit the number of Nodes it'll read? Next, let's talk about the options you have when the Operation is parsed. You can calculate the "complexity" of the Operation. You could "walk" through the AST and apply some sort of algorithm to detect the complexity of the Operation. One way to define complexity is for example the nesting. Here's a Query with nesting of 1: {foo} This Query has nesting of 2: {foo{bar}} This algorithm is a good start. However, it has some downsides. Nesting alone is not a good indicator of complexity. To better understand the complexity, you'd have to look at the possible number of nodes a field can return. This is similar to EXPLAIN ANALYZE in SQL. It gives you some estimates of how the Query Planner thinks the Query will be executed. Keep in mind that these estimations can go completely wrong. So, estimating is not bad, but you should also look at the real number of returned nodes during execution. Companies with public GraphQL APIs, like GitHub, have implemented quite sophisticated rate limiting algorithms. They take into account the number of nodes returned by each field and give you some limits based on their calculations. Here's an example Query from their explanation: query { viewer { repositories(first: 50) { edges { repository:node { name issues(first: 10) { totalCount edges { node { title bodyHTML } } } } } } } } There's one important thing we can learn from them in terms of GraphQL Schema design. If you have a field that returns a list, make sure there is a mandatory argument to limit the number of items returned, e.g. first, last, skip, etc... Only then is it actually possible to calculate the complexity before executing the Operation. Additionally, you'd also want to think about the user experience of your API. It's going to be a poor user experience if GraphQL Operations randomly fail because there's too much data coming back from a list field for some instances.
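As a concrete illustration of the node and depth limits discussed above, here is a minimal depth check using graphql-js (a sketch; packages such as graphql-depth-limit implement the same idea as a proper validation rule, including fragment handling):

const { parse } = require('graphql');

// Rough depth measurement over a parsed operation. Fragment spreads are
// ignored here; a real rule would resolve them against the document.
function maxDepth(node, depth = 0) {
  if (!node.selectionSet) return depth;
  return Math.max(
    ...node.selectionSet.selections.map((sel) =>
      sel.kind === 'Field' ? maxDepth(sel, depth + 1)
      : sel.kind === 'InlineFragment' ? maxDepth(sel, depth)
      : depth
    )
  );
}

const doc = parse('{ foo { bar { foo { bar } } } }');
const depth = Math.max(
  ...doc.definitions
    .filter((d) => d.kind === 'OperationDefinition')
    .map((d) => maxDepth(d))
);
if (depth > 10) throw new Error(`Operation too deep: ${depth}`); // reject before executing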
At the end of the post, we'll pick up this topic again and talk about an even better approach, an approach that works well for both the API provider and the consumer. 5. GraphQL SQL Injection Vulnerability This one should be quite known, but it should still be part of the list. Let's have a look at a simple resolver using graphql-js: 1 2 3 4 5 6 7 Query: { human(obj, args, context, info) { return context.db.loadHumanByID(args.id).then( userData => new Human(userData) ) } } A Query for this resolver might look like this: 1 2 3 4 5 6 query Human { human(id: "1"){ id name } } In case of a badly written implementation of db.loadHumanByID, the SQL statement could look like this: 1 2 3 4 export const loadHumanByID = (id) => { const stmt = `SELECT * FROM humans where id = ${id};`; return db.query(stmt); } In case of the "happy" path, the SQL statement will be rendered like this: 1 SELECT * FROM humans where id = 1; Now, let's try a simple attack: 1 2 3 4 5 6 query Human { human(id: "1 OR 1=1"){ id name } } In case of our attack, the SQL statement looks slightly different: 1 SELECT * FROM humans where id = 1 OR 1=1; As 1=1 is always true, this would return all users. You might have noticed that the function can only return a single user, not a list of users, but for illustration purposes, I think it's clear that we have to deal with the issue. What can we do about this? Be liberal in what you accept, and conservative in what you send. [Postels Law] The solution to the problem is not really GraphQL-specific. You should always validate the inputs. For database access, use prepared statements or an ORM that abstracts away the database layer so that you're not able to inject arbitrary logic into the statement by design. Either way, don't trust user inputs. It's not enough to check if it's a string. 6. GraphQL Authentication Vulnerabilities Another attack vector is incomplete authentication logic. There might be different Query-Paths to traverse to the same object, you have to make sure that every path is covered. Here's an example schema to illustrate the problem: 1 2 3 4 5 6 7 8 9 type Query { me: User! } type User { id: ID! name: String! friends: [User!]! } In the resolver for the field me, you extract the user ID from the context object and resolve the user. So far, there's no issue with this schema. Later on, the product owner wants a new feature, so a new team member adds a new field to the Query type: 1 2 3 4 5 6 7 8 9 10 type Query { me: User! userByID(id: ID!):User } type User { id: ID! name: String! friends: [User!]! } With this change, you have to make sure that the field userByID is also protected by an authentication middleware. It might sound trivial but are you 100% sure that your GraphQL doesn't contain a single access path that is unprotected? We'll pick this item up at the end of the post because there's a simple way to fix the issue. 7. GraphQL Authorization traversal attack Vulnerability Traversal attacks are very simple to exploit while hard to spot. Looking at the previous example, let's say you should only be allowed to view your friends id and name; A simple Query to get the current user looks like this: 1 2 3 4 5 6 7 8 9 10 { me { id name friends { id name } } } As we inject the user ID into the me resolver ourselves, there's not much an attacker can do. What about this Query? 1 2 3 4 5 6 7 8 9 10 11 12 13 14 { me { id name friends { id name friends { id name } } } } With this Query, we're loading all friends and their friends. 
How can we prevent the user from "traversing" this path? The question in this case is, do you protect the edge (friends) or the node (User)? At first glance, it looks like protecting the edge is the right way to do it. So, whenever we enter the field "friends", we check if the parent object (User) is the currently authenticated user. This would work for the Query above, but it has a few drawbacks. One of which is, if you only protect edges, you'd have to protect all of them. Here's another Query that would not be protected by this approach, but it's not the only issue. { userByID(id: "7") { id name } } If you haven't protected the userByID field, we could simply guess user IDs and collect their data. Heading over to the next section, you'll see why protecting the edges is not a good idea. 8. Relay Global Object Identification Vulnerability Your GraphQL Server Framework might implement the Relay Global Object Identification specification. This spec is an extension to your GraphQL schema to make it compatible with the Relay client, the client developed and used by Facebook. What's the problem with this spec? Let's have a closer look at what it allows us to do: { node(id: "4") { id ... on User { name } } } The Relay spec defines that each Node in a Graph must be accessible through a globally unique identifier. Usually, this ID is the base64 encoded combination of the __typename and the id fields of a node. With the node returned, you're able to use fragments to ask for specific node fields. This means, even if your Server is completely secure, by enabling the Relay Extension, you're opening up another attack vector. At this point, it should be clear that protecting the edges is a cat and mouse game which is not in your favor. A better solution to the problem is to protect the node itself. So, whenever we enter the resolver for the type User, we should check if the currently authenticated user is allowed to request the fields. As you can see, you have to make decisions very early on when designing your GraphQL Schema as well as the Database Schema to be able to protect nodes properly. Whenever you enter a node, you must be able to answer the question of whether the currently logged-in user is allowed to see a field or not. So, the question arises whether this logic should really sit in the resolver. If you ask the creators of GraphQL, their answer would be "no". As they've already solved the problem in a layer below the resolvers, the data access layer or their "Entity (Ent) Framework", they didn't address the issue with GraphQL. This is also the reason why authorization is completely missing from GraphQL. That being said, solving the problem a layer below is not the only valid solution. If done right, it can be completely fine to solve the problem from within the resolvers. Before we move on, you should have a look at the excellent entgo framework and its architecture. Even if you're not going to use Golang to build your API layer, you can see how much thought and experience went into the design of the framework. Instead of scattering authorization logic across your resolvers, you're able to define policies at the data layer and there's no way to circumvent them. The access policy is part of the data model. You don't have to use a framework like entgo, but keep in mind that you'd then have to solve this complex problem on your own. Again, we'll revisit this vulnerability later to find a much simpler solution.
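To make the "protect the node, not the edge" idea more concrete, here is a rough sketch of a policy enforced in the data-access layer, in the spirit of the entgo approach described above. The repository class and the db helpers (areFriends, loadFriendIDs, loadUserByID) are illustrative assumptions, not code from any real framework:

// Every User node passes through this repository, no matter which Query path
// (me, friends, userByID, Relay node) was used to reach it.
class UserRepository {
  constructor(db, viewer) {
    this.db = db;
    this.viewer = viewer; // the authenticated user, taken from the request context
  }

  async canView(userId) {
    if (!this.viewer) return false;
    if (this.viewer.id === userId) return true;
    return this.db.areFriends(this.viewer.id, userId); // assumed data-layer helper
  }

  async load(userId) {
    if (!(await this.canView(userId))) throw new Error('Not authorized');
    return this.db.loadUserByID(userId);
  }

  async loadFriends(userId) {
    const ids = await this.db.loadFriendIDs(userId);
    // Each node is re-checked here, so friends-of-friends are filtered out
    // even when a client traverses friends { friends { ... } }.
    const visible = [];
    for (const id of ids) {
      if (await this.canView(id)) visible.push(id);
    }
    return Promise.all(visible.map((id) => this.db.loadUserByID(id)));
  }
}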
9. GraphQL Gateway / Proxying Vulnerability A lot of GraphQL servers are also API Gateways or Proxies to other APIs. Injecting GraphQL arguments into sub-requests is another possible threat we have to deal with. Let's recall the schema from above: type Query { me: User! userByID(id: ID!):User } type User { id: ID! name: String! friends: [User!]! } Let's imagine this Schema is implemented using a REST API with the GraphQL API as an API Gateway in front. The resolver for the userByID field could look like this: export const userByID = async (id: string) => { let results = await axios.get(`http://my.rest.api/user/${id}`); return results.data; } Now, let's not fetch the user but two of their friends! Here's the Query (totally valid): { firstFriend: userByID(id: "7/friends/1"){ id name } secondFriend: userByID(id: "7/friends/2"){ id name } } This results in the following GET requests: GET http://my.rest.api/user/7/friends/1 GET http://my.rest.api/user/7/friends/2 Why is this possible? The ID Scalar should be serialized as a string. While "7" is a valid string, so is "7/friends/1". To solve the problem, you have to validate the input. As the GraphQL type system is only validating if the input is a number or a string, you need to go one step further. If you're accepting strings as input, e.g. because you're using a UUID or GUID, you have to make sure you've validated them before usage. How can we fix it? Again, we need to validate the inputs. WunderGraph offers you a simple way to configure JSON Schema validation for all inputs. This is possible because WunderGraph is keeping your Operations entirely on the server. But we'll come to that later. Everybody else should make sure to validate any input before using it in their resolvers. 10. GraphQL Introspection Vulnerability GraphQL Introspection is amazing! It's the ability of the GraphQL server to tell clients everything about the GraphQL Schema. Tools like GraphiQL and GraphQL Playground use the introspection Query to be able to give the user autocompletion functionality. Without Introspection and the Schema, tools like these wouldn't exist. At the same time, introspection also has a few downsides. The GraphQL schema can contain sensitive information. There's a possibility that your GraphQL schema is leaking internal information or fields that are only used internally. Maybe one of your teams is working on a new MVP which is not yet launched. Your competitors might be scraping your GraphQL API using the introspection Query. Whenever there's a change in the schema, they could immediately see this using a diff. What can we do about this? Most guides advise you to disable the Introspection Query in Production. That is, you'll allow it during development but disallow introspection Queries when deploying to production. However, due to the friendliness of some GraphQL framework implementations, including the graphql-js reference implementation, disabling introspection doesn't really solve the issue. Keep in mind that every implementation depending on the graphql-js reference is also affected by this. So, if disabling introspection doesn't help, what else can we do about it? If your API is only used by your internal staff, you can gate the execution of introspection Queries behind an authentication middleware. This way, you would add a layer of authentication in front of the GraphQL execution.
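A minimal sketch of that gate, assuming an Express-style setup where an earlier auth middleware has already set req.user from a session or token, and where app is the Express application with the GraphQL handler mounted after this check:

// Reject unauthenticated requests before they ever reach GraphQL execution,
// which also keeps introspection private.
app.use('/graphql', (req, res, next) => {
  if (!req.user) {
    return res
      .status(401)
      .json({ errors: [{ message: 'Authentication required' }] });
  }
  next(); // only authenticated requests reach the GraphQL handler (and introspection)
});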
Obviously, this only works for APIs that always require authentication because otherwise users would not be able to make a single request. If you're building an app that can be used by users without authentication, the proposed solution doesn't work. To sum up, by disabling introspection at runtime, you're making it a bit more complicated to introspect the schema, but with most frameworks it's still possible. The next vulnerability will also take advantage of this issue. The ultimate catch-all solution will be presented at the end. 11. Generated GraphQL APIs Vulnerability There are a number of services and tools like e.g. Postgraphile or Hasura that Generate APIs from a database schema. The promise is simple, point the tool at the Database, and you'll get a fully functional GraphQL Server. As we've previously discussed, it's not easy and sometimes impossible (so far) to fully disable introspection at runtime. Generated GraphQL APIs usually follow a common structure to generate their CRUD resolvers. This means, it's quite easy to spot if we're dealing with a custom-made use-case driven API or a generated API. Why is this an issue? If we're not able to disable introspection we're leaking information of our complete database schema to the public. It's already a questionable approach if you want to have tight coupling between client and server, which is the case if we're generating the API from the database schema. That being said, in terms of security, this means we're exposing our whole Database schema to the public. By exposing your Database schema to the public, you're giving attackers a lot of information to find vulnerabilities, try SQL injections, etc... I know it's a repetitive schema but we're also going to address this issue at the end. 12. GraphQL CSRF Vulnerability This issue is not directly a GraphQL vulnerability but a general threat for HTTP-based applications with Cookie- or Session-based authentication mechanisms. If you're using frameworks like NextJS, Cookie-based auth is quite common (and convenient) so it's worth covering as well. Imagine, we're building an app that allows users to send money to other users. A mutation to send money could look like this: 1 2 3 4 5 mutation SendMoney { sendMoney(to: "123" amount: 10 currency: EURO){ success } } What can go wrong? If we're building a Single Page Application (SPA) on app.example.com with an API on a different domain (api.example.com), the first thing you'd have to do to make this setup working is to configure CORS. Make sure to only allow your SPA domain and don't use wildcards! The next thing that could go wrong is to properly configure the SameSite properties for the API domain, the one setting the Cookies. You'd want to use SameSite lax or strict, depending on the user experience. For Queries, it could make sense to use lax, which means we're able to use Queries from a trusted domain, e.g. a different subdomain. For Mutations, strict would definitely be the best option as we only want to accept those from the origin. SameSite none would allow any website to make requests to our API domain, independent of their origin. If you combine a bad CORS configuration with the wrong SameSite Cookie settings, you're in trouble. Finally, attackers could find a way to construct a link to our website that leads already authenticated users to make a transaction that they don't really want to do. To protect against this issue, you should add a CSRF middleware around mutations. WunderGraph does this out of the box. 
For each mutation endpoint, we configure a CSRF middleware. Additionally, we generate our clients in a way so that they automatically handle CSRF. As a developer using WunderGraph, you don't have to do anything. 13. GraphQL Excessive Errors Vulnerability This is another common issue with GraphQL APIs. GraphQL has a nice and expressive way of returning errors . However, some frameworks are by default just a bit too informative. Here's an example of a response from an API that is automatically generated on top of a database: 1 2 3 4 5 6 7 8 9 10 11 { "errors": [ { "extensions": { "path": "$.selectionSet.insert_user.args.objects", "code": "constraint-violation" }, "message": "Uniqueness violation. duplicate key value violates unique constraint \"auser_authprovider_id_key\"" } ] } This error message is quite expressive, it seems like it's coming from a SQL database and it's about a violation of a unique key constraint. While helpful to the developer of the application, it's actually giving way too much information to the API consumer. This message could be written to the logs if any. It seems like an app user is trying to create content with an ID that already existed. In a properly designed GraphQL API, this actually doesn't have to be an error at all. A better way to design this API would be to return a union that covers all possible cases, like e.g. success, conflict, etc... But that's just a general problem with generated APIs. In any case, if generated or not, There should always be a middleware at the very top of your HTTP Server that catches verbose errors like this and removes them from the reponse. If possible, don't just use the generic "errors" response object. Instead, make use of the expressive type system and define types for all possible outcomes of an operation. REST APIs have a rich system of HTTP status codes to indicate the result of an operation. GraphQL allows you to use Interface and Union type definitions so that API consumers can easily handle API responses. It's very hard to programmatically analyze an error message. It's just a string which could change any time. By creating Union and Interface types for responses, you can cover all outcomes of an operation explicitly. An API consumer is then able to switch case over the __typename field and properly handle the "known error". Summary & Vulnerability Checklist Another long blog post comes to an end. Let's recap! We've covered 13 of the most common GraphQL vulnerabilities. Here's a Checklist if you want to go through all of them. Parsing Vulnerabilities Normalization Issues Operation Validation Errors Denial of Service Attacks GraphQL SQL Injections Authentication Vulnerabilities GraphQL Authorization Traversal Attacks Relay Global Object Identification Vulnerability GraphQL Gateway / Proxying Vulnerability GraphQL Introspection Vulnerability Generated GraphQL APIs Vulnerability GraphQL CSRF Vulnerability GraphQL Excessive Errors Vulnerability That's a lot of issues to solve before going to production. Please don't take this lightly. If you look at HackerOne , you can see the issue is real. So, we want to get the benefits of GraphQL, but going through this whole list is just way too much work. Is there a better way of doing GraphQL? Is there a way of doing GraphQL differently so that we're not affected by all the issues. The answer to this question is Yes! All you have to do is to adjust your view on GraphQL. 
Solving the 13 most common GraphQL Vulnerabilities for private APIs Most of us are using GraphQL APIs internally. This means, the developers who use the GraphQL API are in the same organization as the people who provide the API. Additionally, I'm assuming that we're not changing our GraphQL Operations at runtime. All this boils down to the root cause of the problem. Allowing API clients to send GraphQL Operations over HTTP is the root cause of all evil. All this is completely avoidable, adds no value and only creates harm. It's absolutely fine to allow developers within a secured environment to send arbitrary GraphQL Operations. However, most apps don't change their GraphQL Operations in production, so why allow it at all? Let's have a look at the Architecture you're most familiar with. A GraphQL client talks GraphQL to a GraphQL Server. Now, let's make a small change to the architecture to fix all 13 problems. Instead of talking GraphQL between Client and Server, we're talking RPC, JSON-RPC more specifically. The Server then handles Authentication, Authorization, Caching, etc... for us and forwards Requests to the origin servers. We haven't invented this though. It's not something new. Companies like Facebook, Medium, Twitter, and others are doing it. What we've done is not just make it possible and fix the problems listed above. We've created an easy-to-use developer experience. We've assembled everything in one place. You don't have to install numerous dependencies. Let's break down the solution a bit more, so you can fully understand how we're able to solve all the vulnerabilities. Solving Vulnerabilities related to Parsing, Normalizing and Validating GraphQL Operations The most secure code is the code that doesn't have to run at all. Every code has bugs. To fix bugs, we have to write more code, which means, we're introducing even more bugs. So, how can we replace GraphQL with RPC over the wire? During development, the developers define all Operations that are required for the Application. At the time when the app is ready to be deployed, we'll parse, normalize and validate all Operations. We'll then generate JSON-RPC Endpoints for each Operation. As mentioned, we've normalized the Operations. This allows us to treat all inputs (the variables) like a JSON object. We can then parse the variable types and get a JSON Schema for the input. Additionally, we can parse the response schema of the GraphQL Query. This gives us a second JSON Schema. These two will be quite handy later. By doing all this, it's happening automatically, we'll get two things: A number of JSON-RPC Endpoints JSON Schema definitions for the inputs and response objects of all Endpoints By doing this at "deployment time", we don't have to do it during the execution again. We're able to "pre-compile" an execution tree. All that's left at runtime is to inject the variables and execute the tree. We've borrowed this idea from SQL Database Systems, it's quite similar to "Prepared Statements". Ok, this means we've solved three problems. Unfortunately, we've also introduced a new problem! There are no easy to use clients that could make use of our JSON-RPC API. Luckily, we've extracted two JSON-Schemas per Endpoint. If we feed those into a code-generator, we're able to generate fully type-safe clients in any language. These clients are not only very small, but also super efficient as they don't have to do much. So, in the end, we've not only solved three problems but also made our application more performant. 
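To illustrate the "prepared statements" idea, here is a minimal build-time sketch using graphql-js. It is not WunderGraph's actual implementation, just the general shape: every operation is parsed and validated once at build time, and at runtime the client only sends an operation name plus JSON variables, never raw GraphQL:

const fs = require('fs');
const { buildSchema, parse, validate } = require('graphql');

// Assumed layout: schema.graphql plus one named operation per file in ./operations
const schema = buildSchema(fs.readFileSync('schema.graphql', 'utf8'));
const operations = {};

for (const file of fs.readdirSync('./operations')) {
  const source = fs.readFileSync(`./operations/${file}`, 'utf8');
  const document = parse(source);            // parsing happens once, at build time
  const errors = validate(schema, document); // so does validation
  if (errors.length) throw new Error(`${file}: ${errors[0].message}`);
  const name = document.definitions[0].name.value; // assumes a single named operation per file
  operations[name] = document;                // the "prepared statement" executed at runtime
}

// The generated server would expose one endpoint per key in `operations`,
// e.g. POST /operations/Foo with a JSON body containing only the variables.
fs.writeFileSync('operations.json', JSON.stringify(Object.keys(operations), null, 2));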
As another side effect, you're also able to generate forms from these JSON Schema definitions. It's fully integrated into WunderGraph. Solving GraphQL Denial of Service Attacks Most GraphQL DOS vulnerabilities come from the fact that attackers can easily create complex nested Queries that ask for too much data. As we've discussed above, we've just replaced GraphQL with RPC. This means, no dynamic GraphQL Operations. For each operation, we're able to configure a specific rate limit or quote. We can then rate limit our users easily, just like we did it with REST APIs. Solving GraphQL SQL Injections We've extracted a JSON Schema for each RPC Endpoint. This JSON Schema can be adjusted to your desire to allow you to validate all inputs. Have a look at our documentation and how you can use the @jsonSchema directive to configure the JSON Schema for each Operation. Here's an example of how to apply a Regex pattern for a message input: 1 2 3 4 5 6 7 8 9 10 mutation ( $message: String! @jsonSchema( pattern: "^[a-zA-Z 0-9]+$" ) ){ createPost(message: $message){ id message } } Solving GraphQL Authentication Vulnerabilities The issue with Authentication was that you have to make sure that every possible Query path is covered by an authentication Middleware. Introducing the RPC layer locks down all possible Query paths by default. Additionally, you're able to lock down all Operations behind an authentication middleware by default. If you want to expose an Operation to the public, you have to do so explicitly. Solving GraphQL Authorization Traversal Attack Vulnerabilities From an attackers' perspective, traversal attacks are only possible if there's something they can "traverse". By protecting the GraphQL layer with the RPC layer, this feature is removed from the public facing API. The biggest threat is now the developer itself, as they could accidentally expose too much data. Solving the Relay Object Identification Vulnerability Recalling the issue from above, the problem with the Relay spec comes from two angles. One is incomplete Authentication protection, the other one is protecting edges when protecting nodes would be the correct way to do it. The Relay specification allowed you to query any Node with the globally unique object identifier. This gives developers (and the Relay client) a powerful tool but is also another issue to solve. You might see some repetitiveness here, but the Relay Vulnerability is also covered by the RPC facade. Solving GraphQL Gateway / Proxying Vulnerabilities This is one of the more complicated issues to solve. If we're using user inputs as variables for sub-requests, we have to make sure that these variables are exactly what we expect and not trying to exploit an underlying system. To help mitigate these issues, we've made it very easy to define a JSON Schema definition for variables . This way, you're able to define a Regex pattern or other rules to verify the inputs before injecting them into subsequent requests, database Queries, etc... The type system of GraphQL is absolutely great and very helpful, especially for making the lives of API consumers easier. However, when it comes to interpreting a GraphQL request, there are still a few gaps that we're trying to fix. Solving the GraphQL introspection Vulnerability As we've seen in the description of the issue, disabling GraphQL introspection might not be as easy as it seems, depending on the framework you're using. That said, Introspection is relying on the GraphQL layer. 
If you look at how Introspection works, it's just another GraphQL Query, even if it's a special one, with quite a lot of nesting. Should I repeat myself again and tell you that by not exposing GraphQL, you're no longer affected by this issue? Keep in mind that we're still allowing Introspection during development. Tools like GraphiQL will keep working, just not in production, or at least not without a special authentication token. Solving the Generated GraphQL APIs Vulnerability If the GraphQL API is a 1:1 copy of your database schema, you're exposing internals of your architecture via GraphQL Introspection. But then again, we've already solved this issue. Solving the GraphQL CSRF Vulnerability The way to mitigate this issue is by properly configuring CORS and SameSite settings on your API. Then, add a CSRF middleware to the API layer. This adds an encrypted CSRF cookie on a per-user basis. Once the user is logged in, hand them their CSRF token. Finally, if the user wants to invoke a mutation, they must present their CSRF token in a special header which can then be validated by the CSRF Middleware. If there's anything going wrong, or the user logs out, delete the CSRF cookie to block all mutations until there's a valid user session again. All this might sound a bit complicated, especially the interaction between client and server, sending CSRF tokens and Headers back and forth. That's why we've added all this to WunderGraph by default. All mutations are automatically protected. On the server-side, we've got all the middlewares in place. The client is auto-generated, so it knows exactly how to deal with the CSRF tokens and Headers. Solving the GraphQL Excessive Errors Vulnerability This issue is probably one of the bigger threats while being easy to fix. After resolving an Operation, right before sending back the response to the client, make sure to strip out all sensitive information from the errors object. This is especially important if you're proxying to other services. Don't just pipe through everything you've got from the upstream. If you're calling into a REST API and the response code is non-200, don't just return the response as a generic error object. Additionally, think about your API as a product. What should the user experience of your "product" look like in case of an error? Help your users understand the error and what they can do about it, at least for "known good errors". In case of "bad errors", those unexpected ones, don't be too specific; whoever triggered them might not be friendly. Solving the 13 most common GraphQL Vulnerabilities for public APIs Alright, you've seen that by changing our architecture and evolving our understanding of GraphQL, we're able to mitigate almost all of the issues that are coming from GraphQL itself. What's left is the biggest vulnerability of all systems: the people who build them, us, the developers! If the process of debugging means we're removing bugs, what should we call it when we write code? Okay, we're almost done. We've left out one small group of APIs. We've been talking about private APIs for almost the entire article, but we did it for a good reason: probably 99% of APIs are private. But what about partner and public APIs? What if we don't want to put an RPC layer in front of our API? What if we want to directly expose GraphQL to our API consumers? Companies like Shopify and GitHub are doing this. What can we learn from them? Shopify currently has 1273 reports solved.
They've paid out $1,656,873 in bounties to hackers, with a range of $500-$50,000 per bounty. Twitter resolved a total of 1364 issues and paid out a total of $1,424,389. Snapchat paid out only $439,067, with 389 reports resolved. GitLab paid an astounding $1,743,639 with a total of 845 issues resolved. These bounties are not just related to GraphQL, but all the companies listed above can be found on the list of reported GraphQL issues. There's a total of 70 reports on GraphQL with lots of bounties paid out. If you search on other bug bounty websites, you'll probably find more. Companies like GitHub have people who build specialized infrastructure to better understand how their GraphQL API is being used. I was pleased to meet the amazing Claire Knight and listen to her talk at the last GraphQL Summit; it's been quite some time... I've presented all this data to make two points. First, do you really need to expose your GraphQL API? If not, excellent! You're able to apply all the solutions from this article. If, by all means, you absolutely want to expose a GraphQL API, make sure you have the expertise it takes to do so. You should have security experts in-house or at least hire them, and you should be doing regular audits and pen-testing. Don't take this lightly! Let's talk APIs and Security! Do you have questions or want to discuss APIs, GraphQL and Security? Meet us on Discord We're also happy to jump on a call with you and give you a Demo. Book a free meeting now! Try it out yourself! Are you interested in how the GraphQL-to-RPC concept works in reality? Have a look at our one-minute Quickstart and try out the concepts discussed in this article. Sursa: https://wundergraph.com/blog/the_complete_graphql_security_guide_fixing_the_13_most_common_graphql_vulnerabilities_to_make_your_api_production_ready
  5. Connor McGarr takes us through the state of exploitation and exploit mitigations on modern Windows systems.
  6. Dumping and extracting the SpaceX Starlink User Terminal firmware

Towards the end of May 2021 Starlink launched in Belgium, so we were finally able to get our hands on a Dishy McFlatface. In this blog post we will cover some initial exploration of the hardware and explain how we dumped and extracted the firmware. Note that this blog post does not discuss any specific vulnerabilities; we merely document techniques that can be used by others to research the Starlink User Terminal (UT). Towards the end of this blog post we will include some interesting findings from the firmware. Note that SpaceX actively encourages people to find and report security issues through their bug bounty program: https://bugcrowd.com/spacex

We first set up our UT on a flat section of our university building's roof and played around with it for a few hours, giving the UT and router the chance to perform a firmware update. We did run a few mandatory speed tests and were seeing as much as 268 Mbps download and 49 Mbps upload.

Teardown: Level 1

After a few hours of playing around it was time to get our hands dirty and disassemble the UT. There have been a few teardown videos of the UT, but none of them went into the details we were interested in: the main SoC and firmware. Nevertheless, these prior teardowns ([1, 2, 3]) of the dish contain a lot of useful information that allowed us to disassemble our dish without too much damage. It appears that there are a few hardware revisions of the UT out there by now; certain parts of the teardown process can differ depending on the revision, something we learned the hard way. One of the aforementioned teardown videos shows the Ethernet and motor control cables being detached from the main board before the white plastic cover is removed. On our UT, a tug on the motor control cables pulled the entire connector from the PCB; luckily it appears we can repair the damage. In other words, do not pull on those cables but first remove the back plastic cover. For those of you in the same boat: JST BM05B-ZESS-TBT.

After removing the back plastic cover we can see a metal shield covering the PCB, with the exception of a small cut-out containing the connectors for the Ethernet cable and motor control cable. There is one additional, unpopulated connector (4-pin JST SH 1.0mm) that we assumed would contain a UART debug interface, as was shown in [4]. Note that the early teardown videos had an additional connector that is no longer present on our UT.

The UART interface

After hooking up a USB to serial converter we could get some information on the UT's boot process. The output contains information regarding the early stage bootloaders before showing the following output. We can see that the UT is using the U-Boot bootloader, and that typing 'falcon' may interrupt the boot process. While this may give us access to a U-Boot CLI, we can also see that the serial input is configured as 'nulldev'. Unsurprisingly, spamming the serial interface with 'falcon' during boot did not yield any result.

U-Boot 2020.04-gddb7afb (Apr 16 2021 - 21:10:45 +0000)
Model: Catson
DRAM: 1004 MiB
MMC: Fast boot:eMMC: 8xbit - div2 stm-sdhci0: 0
In: nulldev
Out: serial
Err: serial
CPU ID: 0x00020100 0x87082425 0xb9ca4b91
Detected Board rev: #rev2_proto2
sdhci_set_clock: Timeout to wait cmd & data inhibit
FIP1: 3
FIP2: 3
BOOT SLOT B
Net: Net Initialization Skipped
No ethernet found.
* + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Board: SPACEX CATSON UTERM
======================================
= Type 'falcon' to stop boot process =
======================================

Continuing through the boot process we can see that U-Boot loads a kernel, ramdisk and Flattened Device Tree (FDT) from a Flattened uImage Tree (FIT) image that is stored on an embedded MultiMediaCard (eMMC). We can also see that the integrity (SHA256) and authenticity (RSA 2048) of the kernel, ramdisk and FDT is being checked. While we would have to perform some more tests, it appears that a full trusted boot chain (TF-A) is implemented from the early stage ROM bootloader all the way down to the Linux operating system.

switch to partitions #0, OK
mmc0(part 0) is current device

MMC read: dev # 0, block # 98304, count 49152 ... 49152 blocks read: OK
## Loading kernel from FIT Image at a2000000 ...
   Using 'rev2_proto2@1' configuration
   Verifying Hash Integrity ... sha256,rsa2048:dev+ OK
   Trying 'kernel@1' kernel subimage
     Description:  compressed kernel
     Created:      2021-04-16 21:10:45 UTC
     Type:         Kernel Image
     Compression:  lzma compressed
     Data Start:   0xa20000dc
     Data Size:    3520634 Bytes = 3.4 MiB
     Architecture: AArch64
     OS:           Linux
     Load Address: 0x80080000
     Load Size:    unavailable
     Entry Point:  0x80080000
     Hash algo:    sha256
     Hash value:   5efc55925a69298638157156bf118357e01435c9f9299743954af25a2638adc2
   Verifying Hash Integrity ... sha256+ OK
## Loading ramdisk from FIT Image at a2000000 ...
   Using 'rev2_proto2@1' configuration
   Verifying Hash Integrity ... sha256,rsa2048:dev+ OK
   Trying 'ramdisk@1' ramdisk subimage
     Description:  compressed ramdisk
     Created:      2021-04-16 21:10:45 UTC
     Type:         RAMDisk Image
     Compression:  lzma compressed
     Data Start:   0xa2427f38
     Data Size:    8093203 Bytes = 7.7 MiB
     Architecture: AArch64
     OS:           Linux
     Load Address: 0xb0000000
     Load Size:    unavailable
     Entry Point:  0xb0000000
     Hash algo:    sha256
     Hash value:   57020a8dbff20b861a4623cd73ac881e852d257b7dda3fc29ea8d795fac722aa
   Verifying Hash Integrity ... sha256+ OK
   Loading ramdisk from 0xa2427f38 to 0xb0000000
WARNING: 'compression' nodes for ramdisks are deprecated, please fix your .its file!
## Loading fdt from FIT Image at a2000000 ...
   Using 'rev2_proto2@1' configuration
   Verifying Hash Integrity ... sha256,rsa2048:dev+ OK
   Trying 'rev2_proto2_fdt@1' fdt subimage
     Description:  rev2 proto 2 device tree
     Created:      2021-04-16 21:10:45 UTC
     Type:         Flat Device Tree
     Compression:  uncompressed
     Data Start:   0xa23fc674
     Data Size:    59720 Bytes = 58.3 KiB
     Architecture: AArch64
     Load Address: 0x8f000000
     Hash algo:    sha256
     Hash value:   cca3af2e3bbaa1ef915d474eb9034a770b01d780ace925c6e82efa579334dea8
   Verifying Hash Integrity ... sha256+ OK
   Loading fdt from 0xa23fc674 to 0x8f000000
   Booting using the fdt blob at 0x8f000000
   Uncompressing Kernel Image
   Loading Ramdisk to 8f848000, end 8ffffe13 ... OK
ERROR: reserving fdt memory region failed (addr=b0000000 size=10000000)
   Loading Device Tree to 000000008f836000, end 000000008f847947 ... OK
WARNING: ethact is not set. Not including ethprime in /chosen.

Starting kernel ...

The remainder of the boot process contains some other interesting pieces of information. For example, we can see the kernel command line arguments and with that the starting addresses and lengths of some partitions. Additionally, we can see that the SoC contains 4 CPU cores.
[    0.000000] 000: Detected VIPT I-cache on CPU0
[    0.000000] 000: Built 1 zonelists, mobility grouping on. Total pages: 193536
[    0.000000] 000: Kernel command line: rdinit=/usr/sbin/sxruntime_start mtdoops.mtddev=mtdoops console=ttyAS0,115200 quiet alloc_snapshot trace_buf_size=5M rcutree.kthread_prio=80 earlycon=stasc,mmio32,0x8850000,115200n8 uio_pdrv_genirq.of_id=generic-uio audit=1 SXRUNTIME_EXPECT_SUCCESS=true blkdevparts=mmcblk0:0x00100000@0x00000000(BOOTFIP_0),0x00100000@0x00100000(BOOTFIP_1),0x00100000@0x00200000(BOOTFIP_2),0x00100000@0x00300000(BOOTFIP_3),0x00080000@0x00400000(BOOTTERM1),0x00080000@0x00500000(BOOTTERM2),0x00100000@0x00600000(BOOT_A_0),0x00100000@0x00700000(BOOT_B_0),0x00100000@0x00800000(BOOT_A_1),0x00100000@0x00900000(BOOT_B_1),0x00100000@0x00A00000(UBOOT_TERM1),0x00100000@0x00B00000(UBOOT_TERM2),0x00050000@0x00FB0000(SXID),0x01800000@0x01000000(KERNEL_A),0x00800000@0x02800000(CONFIG_A),0x01800000@0x03000000(KERNEL_B),0x00800000@0x04800000(CONFIG_B),0x01800000@0x05000000(SX_A),0x01800000@0x06800000(SX_B),0x00020000@0x00F30000(VERSION_INFO_A),0x00020000@0x00F50000(VERSION_INFO_B),0x00020000
[    0.000000] 000: audit: enabled (after initialization)
[    0.000000] 000: Dentry cache hash table entries: 131072 (order: 9, 2097152 bytes, linear)
[    0.000000] 000: Inode-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
[    0.000000] 000: mem auto-init: stack:off, heap alloc:off, heap free:off
[    0.000000] 000: Memory: 746884K/786432K available (6718K kernel code, 854K rwdata, 1648K rodata, 704K init, 329K bss, 39548K reserved, 0K cma-reserved)
[    0.000000] 000: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[    0.000000] 000: ftrace: allocating 23664 entries in 93 pages
[    0.000000] 000: rcu: Preemptible hierarchical RCU implementation.
[    0.000000] 000: rcu: RCU event tracing is enabled.
[    0.000000] 000: rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=4.
[    0.000000] 000: rcu: RCU priority boosting: priority 80 delay 500 ms.
[    0.000000] 000: rcu: RCU_SOFTIRQ processing moved to rcuc kthreads.
[    0.000000] 000: No expedited grace period (rcu_normal_after_boot).
[    0.000000] 000: Tasks RCU enabled.
[    0.000000] 000: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
[    0.000000] 000: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
[    0.000000] 000: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
[    0.000000] 000: random: get_random_bytes called from start_kernel+0x33c/0x4b0 with crng_init=0
[    0.000000] 000: arch_timer: cp15 timer(s) running at 60.00MHz (virt).
[    0.000000] 000: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x1bacf917bf, max_idle_ns: 881590412290 ns
[    0.000000] 000: sched_clock: 56 bits at 60MHz, resolution 16ns, wraps every 4398046511098ns
[    0.008552] 000: Calibrating delay loop (skipped), value calculated using timer frequency..
[    0.016871] 000: 120.00 BogoMIPS (lpj=60000)
[    0.021129] 000: pid_max: default: 32768 minimum: 301
[    0.026307] 000: Mount-cache hash table entries: 2048 (order: 2, 16384 bytes, linear)
[    0.034005] 000: Mountpoint-cache hash table entries: 2048 (order: 2, 16384 bytes, linear)
[    0.048359] 000: ASID allocator initialised with 32768 entries
[    0.050341] 000: rcu: Hierarchical SRCU implementation.
[    0.061390] 000: smp: Bringing up secondary CPUs ...
[    0.078677] 001: Detected VIPT I-cache on CPU1
[    0.078755] 001: CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
[    0.095799] 002: Detected VIPT I-cache on CPU2
[    0.095858] 002: CPU2: Booted secondary processor 0x0000000002 [0x410fd034]
[    0.112970] 003: Detected VIPT I-cache on CPU3
[    0.113025] 003: CPU3: Booted secondary processor 0x0000000003 [0x410fd034]
[    0.113160] 000: smp: Brought up 1 node, 4 CPUs
[    0.113184] 000: SMP: Total of 4 processors activated.

Finally, when the UT completes its boot process we are greeted with a login prompt:

Development login enabled: no
SpaceX User Terminal.
user1 login:

While making a few attempts at guessing valid login credentials we started realising that this UART interface would be unlikely to result in an easy win. We had to go deeper.

Teardown: Level 2

The back metal cover of the UT is glued to the assembly around the outer edge, and additional glue is applied between the ribs in the metal cover and the underlying PCB. To loosen the glue at the edge of the metal cover we used a heat gun, prying tools, isopropyl alcohol and a lot of patience. Specifically, we first applied heat to a small section, used a prying tool to loosen that section, added IPA to help dissolve the glue, and then did another round with the prying tool. Having removed the metal cover we are greeted by an enormous PCB measuring approximately 55 cm in diameter. The parts of interest to us are shown in the picture below. The flip-chip BGA package with the metal lid is the main SoC on this board (marking: ST GLLCCOCA6BF). Unsurprisingly, the SoC is connected to some volatile DRAM storage and non-volatile flash storage in the form of an eMMC chip.

Identifying eMMC test points

An embedded MultiMediaCard (eMMC) contains flash storage and a controller, and is quite similar to an SD-card. The UT contains a Micron eMMC chip with package marking JY976; Micron offers a convenient tool to decode these package markings to the actual part number: https://www.micron.com/support/tools-and-utilities/fbga. The eMMC chip in question has part number MTFC4GACAJCN-1M and contains 4GB of flash storage in a BGA-153 package. In most scenarios we would desolder such an eMMC chip, reball it and dump it using a BGA socket. However, in this case we first attempted to dump the eMMC in-circuit to minimize the odds of damaging our UT and the eMMC chip. eMMC chips are similar to SD-cards in that they share a similar interface; the eMMC chip supports up to 8 data lines whereas SD-cards support up to 4 data lines. Both eMMC chips and SD-cards support the use of only a single data line at the cost of lower read/write speeds. To read the eMMC chip in-circuit we have to identify the clock (CLK), command (CMD) and data 0 (D0) signals. The 10 test points above the main SoC drew our attention, as 10 test points could be a CMD, a CLK and 8 data lines. Additionally, all of these test points have a 30 Ohm series resistor connected to them, which is relatively common for eMMC connections. We soldered a short wire to each test point, allowing us to create a logic analyser capture during the UT boot process. Using such a capture it is relatively straightforward to identify the required signals. The CLK signal will be the only repetitive signal, CMD is the signal that is first active after the clock starts toggling, and D0 is the first data line to send out data. Determining the remaining 7 data lines is luckily unnecessary to dump the eMMC contents.
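The signal identification described above is easy to automate. Below is a rough Python sketch; the CSV layout (a time column plus one 0/1 column per probed test point) is an assumption about how a logic analyser might export captures, not something from the original post.

import csv
from collections import defaultdict

def identify_emmc_signals(capture_csv):
    """Heuristically label CLK, CMD and D0 in a boot-time capture."""
    transitions = defaultdict(list)   # channel name -> timestamps of its edges
    last = {}
    with open(capture_csv) as f:
        for row in csv.DictReader(f):
            t = float(row["time"])
            for ch, value in row.items():
                if ch == "time":
                    continue
                value = int(value)
                if ch in last and value != last[ch]:
                    transitions[ch].append(t)
                last[ch] = value
    # CLK is the only continuously toggling line, so it has by far the most edges.
    clk = max(transitions, key=lambda ch: len(transitions[ch]))
    # CMD is the first other line to become active once the clock is running,
    # and D0 is the first line to answer after that.
    others = sorted((ch for ch in transitions if ch != clk),
                    key=lambda ch: transitions[ch][0])
    return clk, others[0], others[1]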
Dumping the eMMC in-circuit

To dump the eMMC chip we can connect a reader (that supports 1.8V IO) to the identified test points. Commercial readers that are mostly aimed at phone repair exist and should work well for this purpose (e.g. easy-JTAG and Medusa Pro). Alternatively you can use a regular USB SD-card reader (one that supports 1-bit mode) with an SD-card breakout with integrated level-shifters (e.g. https://shop.exploitee.rs/shop/p/low-voltage-emmc-adapter ). You can also whip something up yourself if you have some parts lying around. The picture below shows a standard USB SD-card reader connected to a TI TXS0202EVM level-shifter breakout board. We only provide power to the eMMC to prevent the main SoC from interfering. The eMMC can be powered through two nearby decoupling capacitors; 3.3V is provided by the SD-card reader and 1.8V is provided using a lab power supply. Once everything is hooked up properly we can create a disk image for later analysis. Note that reading eMMC in-circuit is not always an easy task; wires that are slightly too long can already prevent reading from succeeding. In this case it was rather straightforward, and the system appears to function normally even with these relatively long wires attached.

Unpacking the raw eMMC dump

Unfortunately, Binwalk was not able to extract the full filesystem, so a manual analysis was required. From the boot log it was clear that U-Boot was loading 49152 blocks of data starting at block 98304. This means U-Boot reads 0x1800000 bytes (with a block size of 512 (0x200) bytes) starting from offset 0x3000000. We also know from the U-Boot output that this chunk of data is a FIT image. However, when trying to read the FIT image header information using the dumpimage tool (part of the u-boot-tools package) we weren't getting any useful information. Luckily, SpaceX released their modifications to U-Boot on GitHub for GPL compliance: https://github.com/SpaceExplorationTechnologies By looking at this code it became clear that certain parts of the firmware are stored in a custom format that contains Error Correcting Code (ECC) data.

Stripping Reed-Solomon ECC words

The file spacex_catson_boot.h contains interesting information related to how the device boots. The following snippets show how data is being read from eMMC (mmc read8) and the definition for startkernel.
#define SPACEX_CATSON_COMMON_BOOT_SETTINGS \
	"kernel_boot_addr=" __stringify(CATS_KERNEL_BOOT_ADDR) "\0" \
	"kernel_load_addr=" __stringify(CATS_KERNEL_LOAD_ADDR) "\0" \
	"kernel_offset_a=" __stringify(CATS_KERNEL_A_OFFSET) "\0" \
	"kernel_offset_b=" __stringify(CATS_KERNEL_B_OFFSET) "\0" \
	"kernel_size=" __stringify(CATS_KERNEL_A_SIZE) "\0" \
	"setup_burn_memory=mw.q " __stringify(CATS_TERM_SCRATCH_ADDR) " 0x12345678aa640001 && " \
		"mw.l " __stringify(CATS_TERM_LOAD_ADDR) " 0xffffffff " __stringify(CATS_BOOTTERM1_SIZE) " && " \
		"mw.l " __stringify(CATS_TERM_TOC_SER_ADDR) " " __stringify(CATS_TERM_TOC_SER_VAL) "\0" \
	"startkernel=unecc $kernel_load_addr $kernel_boot_addr && bootm $kernel_boot_addr${boot_type}\0" \
	"stdin=nulldev\0"

#define SPACEX_CATSON_BOOT_SETTINGS \
	SPACEX_CATSON_COMMON_BOOT_SETTINGS \
	"_emmcboot=mmc dev " __stringify(CATS_MMC_BOOT_DEV) " " __stringify(CATS_MMC_BOOT_PART) " && " \
		"mmc read8 $kernel_load_addr ${_kernel_offset} $kernel_size && " \
		"run startkernel\0" \
	"emmcboot_a=setenv _kernel_offset $kernel_offset_a && run _emmcboot\0" \
	"emmcboot_b=setenv _kernel_offset $kernel_offset_b && run _emmcboot\0"

The definition of startkernel is particularly interesting as it shows how the address where the kernel was loaded is being passed to a command called unecc. From the unecc command definition it is quite clear that this functionality is performing error correction on the data read from the eMMC.

U_BOOT_CMD(
	unecc, 3, 0, do_unecc,
	"Unpacks an ECC volume; increments internal ECC error counter on error",
	"<source> <target>\n"
	"\tReturns successfully if the given source was successfully\n"
	"\tunpacked to the target. This will fail if the given source\n"
	"\tis not an ECC volume. It will succeed if bit errors were\n"
	"\tsuccessfully fixed.\n"
	"\t<source> and <target> should both be in hexadecimal.\n"
);

The unecc command calls the do_unecc function implemented in unecc.c. Eventually this will result in calling the ecc_decode_one_pass function defined in ecc.c.

/**
 * Decodes an ECC protected block of memory. If the enable_correction
 * parameter is zero, it will use the MD5 checksum to detect errors and will
 * ignore the ECC bits. Otherwise, it will use the ECC bits to correct any
 * errors and still use the MD5 checksum to detect remaining problems.
 *
 * @data: Pointer to the input data.
 * @size: The length of the input data, or 0 to read until the
 *        end of the ECC stream.
 * @dest: The destination for the decoded data.
 * @decoded[out]: An optional pointer to store the length of the decoded
 *        data.
 * @silent: Whether to call print routines or not.
 * @enable_correction: Indicates that the ECC data should be used
 *        to correct errors. Otherwise the MD5 checksum
 *        will be used to check for an error.
 * @error_count[out]: Pointer to an integer that will be incremented
 *        by the number of errors found. May be NULL.
 *        Unused if !enable_correction.
 *
 * Return: 1 if the block was successfully decoded, 0 if we had a
 *         failure, -1 if the very first block didn't decode (i.e. probably
 *         not an ECC file)
 */
static int ecc_decode_one_pass(const void *data, unsigned long size,
                               void *dest, unsigned long *decoded,
                               int silent, int enable_correction,
                               unsigned int *error_count)

ecc.h contains several relevant definitions:

#else /* !NPAR */
#define NPAR 32
#endif /* NPAR */

/*
 * These options must be synchronized with the userspace "ecc"
 * utility's configuration options. See ecc/trunk/include/ecc.h in the
 * "util" submodule of the platform.
 */
#define ECC_BLOCK_SIZE 255
#define ECC_MD5_LEN 16
#define ECC_EXTENSION "ecc"
#define ECC_FILE_MAGIC "SXECCv"
#define ECC_FILE_VERSION '1'
#define ECC_FILE_MAGIC_LEN (sizeof(ECC_FILE_MAGIC) - 1)
#define ECC_FILE_FOOTER_LEN sizeof(file_footer_t)
#define ECC_DAT_SIZE (ECC_BLOCK_SIZE - NPAR - 1)
#define ECC_BLOCK_TYPE_DATA '*'
#define ECC_BLOCK_TYPE_LAST '$'
#define ECC_BLOCK_TYPE_FOOTER '!'

In the end, what it boils down to is that, in this implementation, an ECC protected block of memory starts with the magic header value SXECCv followed by a version byte (1). This magic value marks the start of the ECC protected data, but also the start of the header block. The header block itself contains (in addition to the magic value and version byte) 215 bytes of data, an asterisk (*), and 32 bytes of ECC code words. The header block is followed by multiple data blocks. Each of these data blocks is 255 bytes long and contains 222 bytes of data followed by an asterisk symbol (*) and 32 bytes of ECC code words. The last data block contains a dollar sign ($) instead of the asterisk and is followed by a final footer block. This footer block starts with an exclamation mark (!) that is followed by the number of data bytes in the ECC protected block of memory (4 bytes) and an MD5 digest over those data bytes.

At this point it should be clear why Binwalk did not succeed in extracting the kernel, initramfs and FDT. Binwalk is able to pick up on the magic values that indicate the start of a particular file, but each block of the file had additional data that prevented Binwalk from extracting it. We used a simple Python script to remove the extra ECC data before using Binwalk to extract the image (a sketch of such a script follows after the board-revision details below). Similarly, we can now also use dumpimage to get more information on the FIT image.

The FIT image and board revisions

The following snippet contains some of the dumpimage output. The FIT image contains 13 boot configurations; all configurations use the same kernel and initramfs images but a different Flattened Device Tree (FDT).
FIT description: Signed dev image for catson platforms
Created:         Fri Apr 16 23:10:45 2021
 Image 0 (kernel@1)
  Description:  compressed kernel
  Created:      Fri Apr 16 23:10:45 2021
  Type:         Kernel Image
  Compression:  lzma compressed
  Data Size:    3520634 Bytes = 3438.12 KiB = 3.36 MiB
  Architecture: AArch64
  OS:           Linux
  Load Address: 0x80080000
  Entry Point:  0x80080000
  Hash algo:    sha256
  Hash value:   5efc55925a69298638157156bf118357e01435c9f9299743954af25a2638adc2
 Image 12 (rev2_proto2_fdt@1)
  Description:  rev2 proto 2 device tree
  Created:      Fri Apr 16 23:10:45 2021
  Type:         Flat Device Tree
  Compression:  uncompressed
  Data Size:    59720 Bytes = 58.32 KiB = 0.06 MiB
  Architecture: AArch64
  Load Address: 0x8f000000
  Hash algo:    sha256
  Hash value:   cca3af2e3bbaa1ef915d474eb9034a770b01d780ace925c6e82efa579334dea8
 Image 15 (ramdisk@1)
  Description:  compressed ramdisk
  Created:      Fri Apr 16 23:10:45 2021
  Type:         RAMDisk Image
  Compression:  lzma compressed
  Data Size:    8093203 Bytes = 7903.52 KiB = 7.72 MiB
  Architecture: AArch64
  OS:           Linux
  Load Address: 0xb0000000
  Entry Point:  0xb0000000
  Hash algo:    sha256
  Hash value:   57020a8dbff20b861a4623cd73ac881e852d257b7dda3fc29ea8d795fac722aa
 Default Configuration: 'rev2_proto2@1'
 Configuration 0 (utdev@1)
  Description:  default
  Kernel:       kernel@1
  Init Ramdisk: ramdisk@1
  FDT:          utdev3@1
  Sign algo:    sha256,rsa2048:dev
  Sign value:   bb34cc2512d5cd3b5ffeb5acace0c1b3dd4d960be3839c88df57c7aeb793ad73a74e87006efece4e9f1e31edbb671e2c63dc4cdcb1a2f55388d83a11f1074f21a1e48d81884a288909eb0c9015054213e5e74cbcc6a6d2617a720949dcac3166f1d01e3c2465d8e7461d14288f1a0abef22f80e2745e7f8499af46e8c007b825d72ab494f104df57433850f381be793bfe06302473269d2f45ce2ff2e8e4439017c0a94c5e7c6981b126a2768da555c86b2be136d4f5785b83193d39c9469bd24177be6ed3450b62d891a30e96d86eee33c2cbfc549d3826e6add36843f0933ced7c8e23085ee6106e3cc2af1e04d2153af5f371712854e91c8f33a4ea434269

From the U-Boot code (spacex_catson_uterm.c) it becomes clear that the boot configuration is decided based on the state of 5 GPIO pins.

/**
 * Check board ID GPIOs to find board revision.
 * The board IDs are mapped as follows
 * id_b0:pio12[2]
 * id_b1:pio12[3]
 * id_b2:pio12[0]
 * id_b3:pio12[1]
 * id_b4:pio20[4]
 */
u32 pio12 = readl(BACKBONE_PIO_A_PIO2_PIN);
u32 pio20 = readl(BACKBONE_PIO_B_PIO0_PIN);
u32 board_id = (((pio12 >> 2) & 1) << 0) |
               (((pio12 >> 3) & 1) << 1) |
               (((pio12 >> 0) & 1) << 2) |
               (((pio12 >> 1) & 1) << 3) |
               (((pio20 >> 4) & 1) << 4);

/*
 * https://confluence/display/satellites/User+Terminal%3A+Catson+ID+Bits
 */
switch (board_id) {
case 0b11111:
	board_rev_string = BOARD_REV_1_1P3;
	break;
case 0b11100:
	board_rev_string = BOARD_REV_1_2P1;
	break;
case 0b11000:
	board_rev_string = BOARD_REV_1_2P2;
	break;
case 0b10100:
	board_rev_string = BOARD_REV_1_3P0;
	break;
case 0b10000:
	/* rev1 pre-production */
	board_rev_string = BOARD_REV_1_PRE_PROD;
	break;
case 0b11110:
	/* rev1 production */
	board_rev_string = BOARD_REV_1_PROD;
	break;
case 0b00001:
	board_rev_string = BOARD_REV_2_0P0;
	break;
case 0b00010:
	board_rev_string = BOARD_REV_2_1P0;
	break;
case 0b00011:
	board_rev_string = BOARD_REV_2_2P0;
	break;
}
}

printf("Detected Board rev: %s\n", board_rev_string);

The following picture shows where these pins are being pulled high/low to indicate the board revision. Note from the earlier serial boot log that our UT boots using the rev2_proto2 configuration (case 0b00011).
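The blog mentions a simple Python script to remove the ECC data but does not include it. Based purely on the SXECCv layout described above, a rough re-implementation could look like the sketch below. The file names and the little-endian encoding of the footer length field are my assumptions, and no Reed-Solomon correction is attempted; the parity bytes are simply dropped.

import hashlib
import struct

BLOCK_SIZE = 255      # ECC_BLOCK_SIZE
MAGIC = b"SXECCv1"    # ECC_FILE_MAGIC + ECC_FILE_VERSION

def unecc(blob):
    """Strip SXECCv1 framing and return the payload (no error correction)."""
    if blob[:len(MAGIC)] != MAGIC:
        raise ValueError("not an SXECCv1 stream")
    out = bytearray()
    offset = 0
    while True:
        block = blob[offset:offset + BLOCK_SIZE]
        offset += BLOCK_SIZE
        if block[:1] == b"!":                           # footer: length + MD5 digest
            length = struct.unpack("<I", block[1:5])[0] # endianness assumed
            digest = block[5:5 + 16]
            data = bytes(out[:length])
            if hashlib.md5(data).digest() != digest:
                raise ValueError("MD5 mismatch after stripping ECC")
            return data
        if block[:len(MAGIC)] == MAGIC:                 # header block: 215 data bytes
            data, marker = block[7:222], block[222:223]
        else:                                           # regular block: 222 data bytes
            data, marker = block[:222], block[222:223]
        if marker not in (b"*", b"$"):
            raise ValueError("unexpected block type %r" % marker)
        out += data

# Carve the kernel FIT image out of the raw dump and un-ECC it. U-Boot reads
# 49152 blocks of 512 bytes starting at block 98304 (offset 0x3000000, which
# lines up with the KERNEL_B partition in blkdevparts - the log shows BOOT SLOT B).
with open("emmc_dump.bin", "rb") as f:
    f.seek(98304 * 512)
    fit_with_ecc = f.read(49152 * 512)
with open("kernel.fit", "wb") as f:
    f.write(unecc(fit_with_ecc))

After stripping the ECC framing, Binwalk and dumpimage can parse the resulting FIT image as described in the post.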
In a recent video, Colin O'Flynn pulled some of these board-ID pins high/low and could observe that the UT tried booting using a different FIT configuration and thus a different device tree [5]. We compared some of the FDTs but did not spot any differences that would be interesting from a security perspective.

A first look at the firmware

The login prompt

Recall that after the boot process has completed we are greeted with a login prompt. For further research it would be useful to gain the ability to log in, allowing us to interact with the live system. However, by looking at the shadow file it becomes clear that none of the users are allowed to log in. During boot the UT does read a fuse to determine if it is development hardware or not. If the UT is unfused it will set a password for the root user, allowing log-in. Starlink UTs that are sold to consumers are of course production fused, disabling the login prompt.

root:*:10933:0:99999:7:::
bin:*:10933:0:99999:7:::
daemon:*:10933:0:99999:7:::
sync:*:10933:0:99999:7:::
halt:*:10933:0:99999:7:::
uucp:*:10933:0:99999:7:::
operator:*:10933:0:99999:7:::
ftp:*:10933:0:99999:7:::
nobody:*:10933:0:99999:7:::
sshd:*:::::::

Development hardware

Development hardware often finds its way into the wrong hands [6, 7]. The engineers at SpaceX considered this scenario and appear to actively try to detect unfused development hardware that is no longer under their control. Development hardware is geofenced to only work in certain predefined areas, most of which are clearly SpaceX locations. SpaceX is likely notified if development hardware is used outside these predefined geofences. Interestingly, some of these geofences do not seem to have a clear connection to SpaceX. While we will not disclose these locations here, I will say that the SNOW_RANCH looks like a nice location to play with development hardware.

Secure element

From references in the firmware it became clear that (our revision of) the UT contains a STMicroelectronics STSAFE secure element. The purpose of the secure element is not entirely clear yet, but it may be used to remotely authenticate the UT.

The SoC

Some people have asked which processor is being used: the answer is a quad-core Cortex-A53, and each core has been assigned a specific task.

############################
# System Information
############################
#
# The user terminal phased-array computers are Catson SoCs with a quad-core
# Cortex-A53.
#
# We dedicate one core to control, while leaving the other three to handle
# interrupts and auxiliary processes.
#
# CPU 0: Control process.
# CPU 1: Lower-MAC RX process.
# CPU 2: Lower-MAC TX process.
# CPU 3: PhyFW and utility core - interrupts, auxiliary processes, miscellaneous

What's next

That's it for now. We will likely continue looking into the Starlink UT and provide more details in future blog posts if there is interest. At the time of writing we were able to obtain a root shell on the UT, but it's too early to publicly share more information on that matter.

References
[1] MikeOnSpace – Starlink Dish TEARDOWN! – Part 1 – https://youtu.be/QudtSo5tpLk
[2] Ken Keiter – Starlink Teardown: DISHY DESTROYED! – https://youtu.be/iOmdQnIlnRo
[3] The Signal Path – Starlink Dish Phased Array Design, Architecture & RF In-depth Analysis – https://youtu.be/h6MfM8EFkGg
[4] MikeOnSpace – Starlink Dish TEARDOWN! – Part 2 – https://youtu.be/38_KTq8j0Nw
[5] Colin O'Flynn – Starlink Dishy (Rev2 HW) Teardown Part 1 – UART, Reset, Boot Glitches – https://youtu.be/omScudUro3s
[6] Brendan I. Koerner – The Teens Who Hacked Microsoft's Xbox Empire – https://www.wired.com/story/xbox-underground-videogame-hackers/
[7] Jack Rhysider – Darknet Diaries EP 45: XBOX UNDERGROUND (PART 1) – https://darknetdiaries.com/episode/45/

Sursa: https://www.esat.kuleuven.be/cosic/blog/dumping-and-extracting-the-spacex-starlink-user-terminal-firmware/
  7. (CVE-2022-41352) Zimbra Unauthenticated RCE

CVE-2022-41352 is an arbitrary file write vulnerability in Zimbra mail servers due to the use of a vulnerable cpio version.

CVE-2022-41352 (NIST.gov)
CVE-2022-41352 (Rapid7 Analysis)

Affected Zimbra versions:
Zimbra <9.0.0.p27
Zimbra <8.8.15.p34
(Refer to the patch notes for more details.)

Remediation: In order to fix the vulnerability, apply the latest patch (9.0.0.p27 and 8.8.15.p34 respectively) - or install pax and restart the server.

Usage: You can either use flags or manipulate the default configuration in the script manually (config block at the top). Use -h for help.

$ python cve-2022-41352.py -h
$ vi cve-2022-41352.py   # Change the config items.
$ python cve-2022-41352.py manual

This will create an attachment that you can then send to the target server. The recipient does not necessarily have to exist - if the email with the attachment is parsed by the server, the arbitrary file write in cpio will be triggered.

Example / Demo: zimbra-rce-demo-cve-2022-41352.mp4

About: Zimbra <9.0.0.p27 RCE

Sursa: https://github.com/Cr4ckC4t/cve-2022-41352-zimbra-rce
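For context on why a mail attachment leads to a file write: when pax is not installed, Zimbra's content filter falls back to cpio to unpack archive attachments, and a vulnerable cpio will follow a symlink contained in the same archive. The sketch below is not the linked PoC, just an illustration of the generic symlink trick built with Python's tarfile; the symlink target and file contents are harmless placeholders (the published exploit points at a directory served by the Zimbra web application).

import io
import tarfile

def build_traversal_tar(path="poc.tar"):
    """Create a tar that abuses symlink-following during extraction."""
    with tarfile.open(path, "w") as tar:
        # 1) a symlink member pointing at a directory outside the extraction dir
        link = tarfile.TarInfo("link")
        link.type = tarfile.SYMTYPE
        link.linkname = "/tmp/target_dir"          # placeholder target
        tar.addfile(link)
        # 2) a regular file whose path goes through that symlink
        payload = b"harmless placeholder content\n"
        member = tarfile.TarInfo("link/dropped.txt")
        member.size = len(payload)
        tar.addfile(member, io.BytesIO(payload))

build_traversal_tar()

A vulnerable cpio extracts the symlink first and then writes dropped.txt through it, i.e. outside its working directory; pax does not have this flaw, which is why installing it is listed as a workaround.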
  8. Welcome!
  9. RDP or TeamViewer would be the simplest options. Or do you mean that the "control" should not be visible?
  10. It probably exists, but as you said yourself, this "fashion" of keyloggers and other junk died a long time ago. Search for "keylogger" on Google and you'll probably find some variants. There's no need to send anything by e-mail; the logs can end up anywhere on the Internet through a million different methods.
  11. Hi, the processor/CPU part is probably not relevant: https://answers.microsoft.com/en-us/windows/forum/all/solved-the-screen-says-locking-and-it-goes-into/cf8de14e-2178-4b7d-8159-a0b08552e2ce PS: I've never seen that screen
  12. The UK government is scanning every internet-connected device to discover vulnerabilities

Tudor Bostan

The UK government has started scanning all internet-connected devices on the territory of the United Kingdom in order to discover potential vulnerabilities and to judge how prepared the country is for a massive cyber attack. The National Cyber Security Centre (NCSC) is the government authority responsible for cyber security in the United Kingdom, and this project was started at the beginning of November.

The scans work by sending requests to servers and devices, and information such as the response to the request, the date, the time and the IP address of the computer/phone/server is recorded in a database. The NCSC then analyses the data to see whether versions of programs or applications with known vulnerabilities show up in the results of the requests. These scans are meant to establish how prepared the United Kingdom would be in the face of a cyber attack.

According to a post on the official NCSC blog written by an official of the agency, the scans performed by the government are very similar to those performed by private cyber security companies. Moreover, it appears these scans will become more and more complex with each new wave of information, which will help the NCSC protect the country's digital space more efficiently.

It is worth mentioning that anyone can send an e-mail to the NCSC asking for certain IP addresses not to be scanned. This means that if you are in the United Kingdom and you don't want the government there to check your computer or phone from time to time, all you have to do is let them know by e-mail and they will leave your privacy alone. Meanwhile, in Romania:

Sursa: https://zonait.ro/marea-britanie-scanare-internet/?
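The article describes the scans as simple requests whose responses (plus date, time and source IP) are logged and later matched against known-vulnerable software versions. In essence that is a banner grab; a minimal Python illustration (host, port and the version check are placeholders, not anything NCSC has published):

import socket

def grab_banner(host, port=22, timeout=3.0):
    """Connect to a service and record whatever it announces about itself."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        return s.recv(256).decode(errors="replace").strip()

# A banner such as "SSH-2.0-OpenSSH_7.4" can then be compared against a list
# of software versions with known vulnerabilities.
print(grab_banner("scanme.example.org"))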
  13. https://def.camp/how-community-involvement-shapes-cybersecurity-careers-and-mindsets/
  14. I would expect it to be a parameter of the function that ends up being called for that syscall, but I don't know exactly. Look for other tutorials that also show how the arguments are used; that tutorial looks basic (like many others I found with a quick search). Otherwise, look in the (kernel) code for examples of syscalls that take parameters and take inspiration from there. An example with a string: https://brennan.io/2016/11/14/kernel-dev-ep3/
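For the userspace side, once a custom syscall with a parameter exists it can be exercised without writing any C, e.g. via libc's syscall() wrapper from Python. The syscall number below is made up; on the kernel side the handler receives the pointer and copies the string in with copy_from_user(), as in the linked tutorial.

import ctypes

libc = ctypes.CDLL(None, use_errno=True)

SYS_MY_HELLO = 548                 # placeholder: whatever number the new syscall got
msg = b"hello from userspace\0"

ret = libc.syscall(SYS_MY_HELLO, ctypes.c_char_p(msg))
if ret < 0:
    print("syscall failed, errno:", ctypes.get_errno())
else:
    print("syscall returned:", ret)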
  15. Not long now, and we'll be easy to recognise: groups making bad jokes, non-work related (or semi-related - e.g. "that girl is so hot, I'd inject my shellcode into her"), who are probably drinking and don't care what's going on around them. Come join us (preferably not empty-handed; something to drink, anything is appreciated).
  16. Nobody uses it; ask on the forum if you want to find something out, that way more people can help you.
  17. Hi, you have a lot of resources available on the Internet, including here on the forum in the technical sections (e.g. English-language tutorials), but you need a foundation. I recommend the book "Introduction to Penetration Testing", which covers quite a lot. Once you've got the basics down it will be easier to choose a path. Phishing is probably not what you need, and you don't need a tool for something like that anyway; it's also something that can easily get you into trouble. Tell us what you're actually trying to achieve and we'll try to help. I saw that you're young, and the risk of going down the wrong path and getting into trouble is high. Don't do anything stupid; it makes no sense and it's not worth it.
  18. Try another browser, like Internet Explorer; it's possible that a modern browser no longer supports such an old SSL/TLS version. You can also find this in the browser settings (although it doesn't really look like it). I'm guessing the router comes with SSL 2/3... In Firefox, type about:config and search for "tls"; there you'll find "security.tls.version.min". Change the value to "1" or "0" and see if it works that way.
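If no browser cooperates, a quick check from Python can at least confirm what the router still speaks, assuming it does at least TLS 1.0 (plain SSL 2/3 is gone from modern OpenSSL builds). The address is a placeholder:

import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1   # allow the router's ancient TLS
ctx.set_ciphers("DEFAULT:@SECLEVEL=1")       # newer OpenSSL refuses old handshakes otherwise
ctx.check_hostname = False                   # routers ship self-signed certificates
ctx.verify_mode = ssl.CERT_NONE

resp = urllib.request.urlopen("https://192.168.1.1/", context=ctx)
print(resp.status, resp.headers.get("Server"))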
  19. Even spam is evolving, nice.
  20. If you have a good (supervised) training set in which you label a lot of images for these categories, it should learn them reasonably well. I'm no expert on the subject, but it should be feasible; the accuracy depends on the model.
  21. You could do it on YouTube, but it's hard to get to the point where you earn well; there are quite few people in Romania who make good money from it. Other than that, in my opinion it's not worth the effort; the simplest thing is to get a job and have no worries.
  22. I don't recommend a RAT when you don't know what it does. An application like Bitdefender or something trustworthy that offers these features, even though it costs a few dollars, may be worth the money.
  23. The ideal solution: pick a seat in the middle of the room/lab and turn your head towards the others' monitors. A harder solution doesn't really exist. Since it's a local network you can do Man-in-the-Middle, but you can't capture the encrypted traffic because of TLS. You would need to generate a root CA, install it on all the computers, and when a site is accessed, generate at runtime a valid server certificate (signed by that root CA) for whatever the user is visiting. That's assuming no more thorough certificate validation is done (SSL/TLS pinning). And that's assuming you have no problems with the MITM itself (does ARP spoofing still work?). Even then, the attack has to be targeted, because if 20 PCs send their traffic through you and you want to do all of this nonsense, you still need a powerful PC. Hint: this is not something user-friendly where you download a program, click twice and boom, Hackerman. If you pull it off, there's a beer on me.
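The first step of the interception setup described above (the root CA that would have to be installed on every machine) looks roughly like this with the Python cryptography library. This is only the CA generation as a sketch; the per-site leaf certificates would then be signed with this key at runtime, which is what tools like mitmproxy automate.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_root_ca(common_name="Lab Root CA"):
    """Generate a private key and a self-signed CA certificate."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )
    return key, cert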
  24. Buy a DNS name (if it's used by several people) or put it in /etc/hosts. You can get a valid certificate for free with Let's Encrypt (public IP, valid DNS), or a self-signed but fairly useless one with openssl.