How WhatsApp was Hacked by Exploiting a Buffer Overflow Security Flaw

WhatsApp has been in the news lately following the discovery of a buffer overflow flaw. Read on to learn exactly how it happened, and then try hacking a similar flaw yourself.

(10 minute read)

WhatsApp entered the news early last week following the discovery of an alarming targeted security attack, first reported by the Financial Times. WhatsApp, famously acquired by Facebook for $19 billion in 2014, is the world’s most popular messaging app, with 1.5 billion monthly users in 180 countries, and has always prided itself on being secure. Below, we explain what went wrong technically and show how a similar memory corruption vulnerability can be exploited.

First, what’s up with WhatsApp security?

WhatsApp has been a popular communication platform for human rights activists and other groups seeking privacy from government surveillance, thanks to the company’s early stance of providing strong end-to-end encryption for all of its users. In theory, only the WhatsApp users involved in a chat are able to decrypt those communications, even if someone were to hack into the servers run by WhatsApp Inc.; the protocol also provides forward secrecy, so that a later key compromise does not expose past messages. An independent audit by academics in the UK and Canada found no major design flaws in the underlying Signal messaging protocol deployed by WhatsApp. We suspect that the company’s security pedigree and focus on baking in privacy come from the strong security mindset of WhatsApp founder Jan Koum, who grew up as a hacker in the w00w00 hacking group in the 1990s.

But WhatsApp was then used for … surveillance?

WhatsApp’s reputation as a secure messaging app and its popularity amongst activists made the report all the more disconcerting: a third-party company had been stealthily offering turn-key targeted surveillance against WhatsApp’s Android and iPhone users. The company in question, the notorious and secretive Israeli firm NSO Group, is likely an offshoot of Unit 8200, the Israeli intelligence unit alleged to have been involved in the Stuxnet cyberattack against the Iranian nuclear enrichment program. NSO has recently been under fire for licensing its advanced Pegasus spyware to foreign governments and for allegedly helping the Saudi regime spy on the journalist Jamal Khashoggi. The severe accusations prompted the NSO co-founder and CEO to give a rare interview with 60 Minutes about the company and its policies, and Facebook is now considering legal options against NSO. The initial fear was that WhatsApp’s end-to-end encryption had been broken, but this turned out not to be the case.

So what went wrong?

Instead of attacking the encryption protocols used by WhatsApp, the NSO Group attacked the mobile application code itself. Following the adage that a chain is only as strong as its weakest link, sensible attackers avoid spending resources on decrypting their target’s communications if they can instead simply hack the device and grab the private encryption keys themselves. In fact, hacking an endpoint device reveals all the chats and dialogs of the target and provides a perfect vantage point for surveillance. This strategy is well known: as early as 2014, the exiled NSA whistleblower Edward Snowden pointed to the tactic of governments hacking endpoints rather than attacking the encrypted messages themselves.

According to a brief security advisory issued by Facebook, the attack against WhatsApp exploited a previously unknown (0-day) vulnerability in the mobile app: a malicious user could trigger it simply by initiating a call to any WhatsApp user logged into the system. A few days after the Financial Times broke the news of the WhatsApp security breach, researchers at CheckPoint reverse engineered the security patch issued by Facebook to narrow down what code might have contained the vulnerability. Their best guess is that the WhatsApp application code contained what’s called a buffer overflow memory corruption vulnerability, caused by insufficient checking of the length of attacker-supplied data.

I’ve heard the term buffer overflow. But I don’t really know what it is.

To explain buffer overflows, it helps to think about how the C and C++ programming languages approach memory. Unlike most modern programming languages, where memory for objects is allocated, bounds-checked and released automatically, a C/C++ program sees the world as a continuum of 1-byte memory cells. Let’s imagine this memory as a vast row of labeled boxes, numbered sequentially from 0.

[Image: rows of labeled storage boxes (photo by Samuel Zeller on Unsplash)]

Suppose some program, through dynamic memory allocation, opts to store the name of the current user (“mom”) as the three characters “m”, “o” and “m” in boxes 17000 to 17002, while other, unrelated data lives in boxes 17003 onwards.

[Image: memory diagram -- the characters “m”, “o”, “m” stored in boxes 17000 to 17002]

A crucial design decision in C and C++ is that it is entirely the programmer’s responsibility to make sure data winds up in the correct memory cells -- the right set of boxes. If the user instead types in “mommy” and the program never checks the length, it will happily place the extra two characters into boxes 17003 and 17004 without any warning, overwriting whatever other, potentially important, data lives there. Neither the compiler nor the runtime will complain.

[Image: memory diagram -- “mommy” spilling over into boxes 17003 and 17004]
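
To make the boxes concrete, here is a minimal C sketch of the same situation. The struct, field names and sizes are invented purely for illustration (they are not from the article): a name buffer sized for “mom” sits right next to other data, and copying “mommy” into it silently spills into the neighbor.

#include <stdio.h>
#include <string.h>

int main(void) {
    /* Two adjacent "boxes": a name buffer with room for exactly "mom",
       and some other, unrelated data living right next to it. */
    struct {
        char name[3];      /* boxes 17000-17002 in the article's analogy */
        char balance[8];   /* boxes 17003 onwards: the "other data"      */
    } account = { "", "1000" };

    const char *typed_by_user = "mommy";   /* 5 characters plus a terminating NUL */

    /* Neither the compiler nor the runtime stops this: strcpy keeps writing
       until it reaches the NUL, well past the end of name. */
    strcpy(account.name, typed_by_user);   /* undefined behavior: buffer overflow */

    printf("name    = %s\n", account.name);
    printf("balance = %s\n", account.balance);   /* likely clobbered by the spill */
    return 0;
}

On a typical compiler this prints a corrupted balance, because the extra characters of “mommy” land exactly where the neighboring data used to be.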

But how does this relate to security?

Of course, if the memory corruption bug the programmer introduced always puts data into the extra boxes 17003 and 17004 and always disturbs the control flow of the program, the programmer has most likely already discovered the mistake while testing -- the program is bound to fail every time, after all. But when problems arise only in response to certain unusual inputs, the issue is far more likely to have slipped through testing and persisted in the code base.

Where such overwriting behavior gets interesting for hackers is when the data in box 17003 is of material importance for deciding how the program should continue to run. The formal way to say this is that the overwritten data might affect the control flow of the application. For example, what if boxes 17003 and 17004 contain information about which function in the program should be called when the user logs in? (In C, this might be a function pointer; in C++, it might be an object’s pointer to its virtual function table.) Suddenly, the path of the program’s execution can be influenced by the user. It’s as if you could tell somebody else’s program, “Hey, you should do X, Y and Z”, and it would abide. If you were the hacker, what would you do with that opportunity? Think about it for a second. What would you do?

I would … tell the program to .. take over the computer?

You would likely choose to steer the program somewhere that gives you further access, so that you can do some more interactive hacking. Perhaps you could make it run code that provides remote access to the computer (or phone) on which the program is running. Crafting that payload is the art of writing shellcode: code that boots up a remote UNIX shell interface for the hacker (get it?).
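
Concretely, the hijacking step described above can be sketched in a few lines of C. Everything here -- the session struct, the field names, the assumption of a typical 64-bit layout -- is invented for illustration and has nothing to do with WhatsApp’s actual code.

#include <stdio.h>
#include <string.h>

static void show_dashboard(void) { puts("Welcome back!"); }
static void grant_admin(void)    { puts("You now have admin rights."); }

/* Hypothetical session record: the username buffer sits right next to the
   function pointer that decides what runs after login -- the "boxes 17003
   and 17004" of the example above. */
struct session {
    char username[8];
    void (*on_login)(void);
};

int main(void) {
    struct session s;
    s.on_login = show_dashboard;

    /* A name longer than 8 bytes spills into on_login. In a real exploit the
       overflowing bytes would be chosen so the pointer lands on an address of
       the attacker's choosing (say grant_admin, or injected shellcode); here
       we merely show that the adjacent pointer gets clobbered. */
    const char *attacker_name = "AAAAAAAAAAAAAAAA";            /* 16 bytes of 'A' */
    memcpy(s.username, attacker_name, strlen(attacker_name));  /* overflow */

    printf("on_login was %p, is now %p (0x41 is 'A')\n",
           (void *)show_dashboard, (void *)s.on_login);
    printf("a target worth aiming at: grant_admin at %p\n", (void *)grant_admin);
    /* Calling s.on_login() now would jump to 0x4141414141414141 and crash. */
    return 0;
}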

Two key ideas make such attacks possible. The first is that, in the view of a computer, there is no fundamental difference between data and code: both are represented as a series of bits. Thus it may be possible to inject data into the program -- say, instead of the string “mommy” -- that is then interpreted and executed as code! This is indeed how buffer overflows were first exploited: described hypothetically as early as 1972, used in practice by Robert T. Morris’s Morris worm that swept the internet in 1988, and popularized by Aleph One’s 1996 article Smashing the Stack for Fun and Profit in the underground hacker magazine Phrack.
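
To see the first idea in action, here is a minimal sketch for Linux on x86-64 (an assumption on our part; the bytes are architecture-specific and unrelated to WhatsApp). It stores six raw bytes, the machine code for mov eax, 42 followed by ret, into a writable and executable page and then calls those bytes as a function.

#define _DEFAULT_SOURCE          /* for MAP_ANONYMOUS on glibc */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* Six bytes of x86-64 machine code: mov eax, 42 ; ret */
    const unsigned char payload[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Ask the OS for a page we are allowed to both write and execute.
       (Modern mitigations normally keep W and X separate.) */
    void *page = mmap(NULL, sizeof payload,
                      PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(page, payload, sizeof payload);        /* "data" ...                */

    int (*fn)(void) = (int (*)(void))page;        /* ... reinterpreted as code */
    printf("the bytes, executed, return %d\n", fn());
    return 0;
}

Defenses such as non-executable data pages (DEP/NX) exist precisely to deny attackers this step, which is what pushed exploitation toward the code-reuse techniques described next.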

The second idea, which crystallized after a series of defenses made it difficult to execute attacker-introduced data as code, is to direct the program to execute a sequence of instructions that already exist within the program, in an order of the attacker’s choosing, without introducing any new instructions at all. Think of a ransom note composed of letter cutouts from newspapers: the author never has to provide any handwriting of their own.

[Image: ransom note assembled from newspaper letter cutouts (generated with Ransomizer.com)]

Such code-reuse attacks, the most prominent being return-oriented programming (ROP), are the state of the art in binary exploitation and the reason buffer overflows are still a recurring security problem. Buffer overflows and related memory corruption flaws still accounted for roughly 14% of the nearly 19,000 vulnerabilities reported in the CVE repository in 2018.

[Chart: memory corruption vulnerabilities as a share of CVEs reported in 2018]

Ahh… but what happened with WhatsApp?

By dissecting the WhatsApp security patch, the researchers at CheckPoint found the following highlighted changes to the machine code of the Android app. The code lives in the real-time video transmission part of the program, specifically code that handles the exchange of information about how well the video of a call is being received (the RTP Control Protocol (RTCP) feedback channel for the Real-time Transport Protocol).

[Image: highlighted changes in the patched WhatsApp machine code (image credit: CheckPoint Research)]

The odd choices of variable names and structure are artifacts of the reverse engineering process: the source code for the protocol is proprietary. The C++ code might be a heavily modified version of the open-source PJSIP routine that assembles a response signaling a picture loss (PLI); the code below is illustrative:

int length_argument = incoming_rtcp_length;  /* illustrative name: parsed from the incoming RTCP packet, i.e. attacker-controlled */
/* Copy that many bytes with no check against the space remaining after offset. */
qmemcpy(&outgoing_rtcp_payload[offset], incoming_rtcp_packet, length_argument);
/* ... continue building the RTCP PLI packet and send it ... */

But if the remaining size of the payload buffer (after offset) is smaller than length_argument, a number supplied by the hacker, then memcpy shamelessly copies data from the incoming packet over whatever lives beyond outgoing_rtcp_payload! Just as in the buffer overflow example above, the overwritten memory can include values that later direct the control flow of the program, such as a function pointer.

In summary (coupled with some speculation): a hacker initiates a video call to an unsuspecting WhatsApp user. As the video channel is being set up, the hacker manipulates the video frames sent to the victim to force the RTCP code in the victim’s app to signal a picture loss (PLI), having specially crafted the frames so that the length fields in the incoming packet exceed the space remaining in the outgoing RTCP response payload. The control flow of the program is then redirected into malicious code that seizes control of the app, installs an implant on the phone, and then lets the app continue running as if nothing happened.
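
The fix for this class of bug is a length check before the copy. The sketch below is illustrative only: the buffer size, function name and signature are our assumptions, not the actual WhatsApp patch.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Illustrative only: size and names are assumptions, not WhatsApp's code. */
#define RTCP_PAYLOAD_SIZE 1500

static int append_rtcp_data(uint8_t *outgoing_rtcp_payload, size_t offset,
                            const uint8_t *incoming_rtcp_packet,
                            size_t length_argument)
{
    /* length_argument comes from the network, i.e. from a potential attacker.
       Refuse to copy if it would not fit in the space remaining after offset. */
    if (offset >= RTCP_PAYLOAD_SIZE ||
        length_argument > RTCP_PAYLOAD_SIZE - offset)
        return -1;                                   /* drop the malformed packet */

    memcpy(&outgoing_rtcp_payload[offset], incoming_rtcp_packet, length_argument);
    return 0;
}

int main(void)
{
    uint8_t payload[RTCP_PAYLOAD_SIZE] = {0};
    uint8_t incoming[16] = {0};

    /* A sane length is accepted (0); an attacker-sized one is rejected (-1). */
    printf("copy 16 bytes at offset 8:    %d\n", append_rtcp_data(payload, 8, incoming, 16));
    printf("copy 60000 bytes at offset 8: %d\n", append_rtcp_data(payload, 8, incoming, 60000));
    return 0;
}

Presumably the shipped fix adds a check along these lines; since the protocol source is proprietary, CheckPoint could only infer the change from the patched binary.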

Try it yourself?

Buffer overflows are technical flaws, and exploiting them builds on an understanding of how computers execute code. Given how prevalent and important they are -- as the WhatsApp attack illustrates -- we believe we should all better understand how such bugs are exploited, so that we can avoid them in the future. To that end, we have created a free online lab that puts you in the shoes of the hacker and shows how memory and buffer overflows work when you boil them down to their essence.

Do you like this kind of thing? 
Go read about how Facebook got hacked last year and try out hacking it yourself.

Learn more about our security training platform for developers at adversary.io

 

Source: https://blog.adversary.io/whatsapp-hack/
