[insert coffee break here -- for me at least, you go ahead and read the previous article ;-)]

## The XPACK Unpacker Stub

Welcome back. Now that you know the basic concept of how the unpacking stub works, I will walk you through the central unpacking steps. From now on, I will call the packer/unpacker XPACK, as at a certain point the string "XPXAXCXK" appears. This string is too cumbersome to write all the time and PACK seems too generic, hence the name. BTW: this guy also calls it XPACK ;-) The code examples and addresses refer to the sample with MD5 hash d5b0a448f59c3e946255333ad0ef5cc5.

NB: you can't just disassemble the sample and jump to these addresses, as the actual unpacking stub is encrypted, as can be seen in my last post.

### Chunk Assembly

The first stage of the XPACK unpacker stub (function at address 004043E4) sifts byte by byte through the whole binary and searches for data chunks. These chunks have the following format:

```c
struct Chunk
{
    unsigned char magic[8];
    ULONG hash;
    ULONG unknown;
    ULONG dechunkedOffset;
    ULONG size;
    unsigned char chunkData[];
};
```

The data structure is somewhat interesting, as the magic is not a fixed number, but has to meet certain conditions. The conditions are:

- none of the first 4 bytes may be 0
- (byte 0 XOR byte 1) OR byte 2 must be equal to byte 4
- (byte 1 XOR byte 2) OR byte 3 must be equal to byte 5
- (byte 2 XOR byte 3) OR byte 0 must be equal to byte 6
- (byte 0 XOR byte 3) OR byte 1 must be equal to byte 7

If these conditions hold, a checksum over the magic bytes is computed and compared against the hash field. The checksum is a modified CRC-32:

```c
ULONG modifiedCrc(const unsigned char *memArray)
{
    int hashTable[256];
    unsigned int hash;

    for (size_t i = 0; i < 256; i++)
        hashTable[i] = genHashTableElement(i);

    hash = 0xffffffff;
    for (size_t i = 0; i < 8; i++)
        hash = (hash >> 8) ^ hashTable[(hash ^ memArray[i]) & 0xff];

    return hash ^ 0xffffffff;
}
```

Yes, boring code, you are right ;-) I took the liberty of cleaning up the code a little, that's why. Of special interest is the function `genHashTableElement`, as it contains code specific to each sample. It starts at 00403B45 and, again after some cleaning and slight modifications, roughly looks like this:

```c
#define previous(x) (((x) + 0x789ADA71) ^ 0x75683572)
#define next(x)     (((x) ^ 0x75683572) - 0x789ADA71)

ULONG genHashTableElement(ULONG v1)
{
    v1 = previous(v1);

    for (size_t i = 0; i < 8; i++)
    {
        if ( next(v1) & 1 )
            v1 = (next(v1) >> 1) ^ 0xEDB88320;
        else
            v1 = next(v1) >> 1;
    }

    return v1;
}
```

`previous` and `next` are sample-specific functions reversing each other's operations. You can easily see that `next(previous(x)) = x` for any 32-bit integer. The functions are closely related to the obfuscation algorithm and have probably been introduced to make reversing harder. Nevertheless, we can rewrite the code to:

```c
ULONG genHashTableElement(ULONG v1)
{
    for (size_t i = 0; i < 8; i++)
    {
        if ( v1 & 1 )
            v1 = (v1 >> 1) ^ 0xEDB88320;
        else
            v1 >>= 1;

        if (i == 7)
            return v1;

        v1 = next(v1);
    }
}
```

This is pretty much the standard CRC-32 table generation algorithm, with the addition of the next call during each round but the very last. I don't know if that was the original intent of the authors or just a glitch in the implementation of the obfuscation engine, but the result is a polymorphic hash function.
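For reference, the textbook table-entry computation for CRC-32 with the reflected polynomial 0xEDB88320 — which the function above collapses to once the `next`/`previous` pairs cancel — can be sketched as follows (a reference sketch, not code from the sample):

```c
typedef unsigned int ULONG;

/* textbook CRC-32 table entry generation, reflected polynomial 0xEDB88320 */
ULONG crc32TableEntry(ULONG v)
{
    for (int i = 0; i < 8; i++)
    {
        if (v & 1)
            v = (v >> 1) ^ 0xEDB88320u;
        else
            v >>= 1;
    }
    return v;
}
```

Comparing its output against the well-known CRC-32 lookup table makes the relationship obvious.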

Once both the preconditions and the hash match the input values, the chunk data of `size` bytes is copied to a target buffer at offset `dechunkedOffset`. Thus, the chunks don't have to appear in any particular order in the binary. This procedure is repeated throughout the whole binary, and magically, all chunks fit neatly together in the target buffer.
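Conceptually, the reassembly boils down to copying each chunk's payload to its `dechunkedOffset` in the target buffer. A minimal sketch (magic and hash validation omitted; `ChunkView` is my own simplified in-memory representation, not the on-disk struct):

```c
#include <stddef.h>
#include <string.h>

/* simplified view of an already-validated chunk */
typedef struct
{
    unsigned int dechunkedOffset;  /* target offset in the assembled buffer */
    unsigned int size;             /* payload length */
    const unsigned char *data;     /* pointer to the chunk payload */
} ChunkView;

/* Copy every chunk to its target offset; order of chunks is irrelevant. */
void assembleChunks(const ChunkView *chunks, size_t count, unsigned char *target)
{
    for (size_t i = 0; i < count; i++)
        memcpy(target + chunks[i].dechunkedOffset, chunks[i].data, chunks[i].size);
}
```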

### Decryption (1)

The next step in the unpacking process of XPACK is a decryption layer (loop at 0040145F). In this sample, the decryption consists of the following code:

```cpp
for (size_t i = 0; i < s.length(); i++)
    s[i] = s[i] - i % 0x5a;
```

In other samples, the code is slightly more complicated, but always follows the pattern

```cpp
s[i] += a - (i % b);
```

with `a` and `b` being constant byte values.
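In this sample, the pattern is instantiated with a = 0 and b = 0x5A. A self-contained sketch of the generic layer, together with its inverse (the inverse is my own addition, used only for round-trip testing):

```c
#include <stddef.h>

/* generic form of XPACK's first decryption layer: s[i] += a - (i % b);
   b must be non-zero */
void decryptLayer1(unsigned char *s, size_t len, unsigned char a, unsigned char b)
{
    for (size_t i = 0; i < len; i++)
        s[i] = (unsigned char)(s[i] + a - (i % b));
}

/* hypothetical inverse (packing direction), for round-trip testing only */
void encryptLayer1(unsigned char *s, size_t len, unsigned char a, unsigned char b)
{
    for (size_t i = 0; i < len; i++)
        s[i] = (unsigned char)(s[i] - a + (i % b));
}
```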

### Base-64 Decoding

As a result of the first decryption, a standard base-64 encoded string appears, which is simply decoded. In the presented sample, this happens in the function at address 00404434.

### Decryption (2)

After the base-64 decoding, yet another decryption layer is employed. It is divided into two stages:

- A simple addition/subtraction round is applied to the data blob, which you can see in the following image:

*(image: XPACK decryption stage 2a)*

In a high-level language, the code looks like

```c
void decrypt2a(unsigned char *buffer, DWORD len)
{
    unsigned int currentBlock = 0;

    if ( !len )
        return;

    do
    {
        for (size_t i = 0; i < 16 && i + currentBlock < len; i++)
            buffer[i + currentBlock] += -2 - ((i + currentBlock) & 3) *
                ((i + currentBlock) & 0x1F) * ((currentBlock >> 4) & 3);

        for (size_t i = 0; i < 8 && i + currentBlock < len; i++)
            buffer[i + currentBlock] -= i;

        currentBlock += 64;
    }
    while ( currentBlock < len );
}
```

- The buffer is XORed with a constant byte and yet another constant byte is added/subtracted (loop at address 00401582):

```cpp
for (size_t i = 0; i < s.length(); i++)
    s[i] = (s[i] ^ 0xA1) - 0x6E;
```

Again, these constants are specific to this sample and differ from sample to sample. The general way XPACK implements this decryption layer is

```cpp
s[i] = (s[i] ^ c) - d;
```

with `c` and `d` being bytes supposedly chosen at random when the original executable was packed.
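Since XOR is an involution, this layer is undone by reversing the two operations: add `d` back, then XOR with `c`. A standalone sketch (the encrypt direction is my own addition for round-trip testing; this sample's constants 0xA1/0x6E are used in the test):

```c
#include <stddef.h>

/* XPACK decryption stage 2b: s[i] = (s[i] ^ c) - d */
void decryptStage2b(unsigned char *s, size_t len, unsigned char c, unsigned char d)
{
    for (size_t i = 0; i < len; i++)
        s[i] = (unsigned char)((s[i] ^ c) - d);
}

/* hypothetical inverse (packing direction), for round-trip testing only */
void encryptStage2b(unsigned char *s, size_t len, unsigned char c, unsigned char d)
{
    for (size_t i = 0; i < len; i++)
        s[i] = (unsigned char)((s[i] + d) ^ c);
}
```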

### Decompression

As a result of the last decryption layer, the following data structure emerges:

```c
struct XpackStruct
{
    unsigned char magic[8];
    ULONG decompressedLength;
    ULONG compressedLength;
    unsigned char compressedBuffer[];
};
```

`magic` is our long-awaited string "XPXAXCXK", `compressedLength` represents the length of the compressed buffer, while `decompressedLength` defines the length of the data once it is unpacked. After that, the structure holds the compressed data.

Hence, the last step in the unpacking process is decompressing the buffer. In the presented sample, the function can be found at address 00401903. The algorithm is obviously statically linked into the binary, as it does not bear the obfuscation methods found throughout the sample. Furthermore, it seems to be either compiled for minimal size or hand-crafted in assembly, as some very uncommon patterns, such as calls into the middle of functions, can be found.

Unfortunately, I have no further information about the decompression algorithm and would be happy if anyone could tell me more. For a proper code listing, take a look at Decompressor.cpp on my GitHub page.

## Creating a Generic Unpacker

In order to properly, and, most of all, generically, unpack XPACK-packed code, we have to overcome all of the hurdles the authors put in place. In the following section, I will point out the weaknesses in each stage of the XPACK implementation. Furthermore, several ways of exploiting these weaknesses will be presented and my choices will be explained. So, let's cut to the chase.

### Chunk Assembly

As the results of the hash function differ from sample to sample due to the CRC table generation (remember, the constants in `next(x) ((x ^ 0x75683572) - 0x789ADA71)` differ in each sample), we are presented with several choices:

- skip the validation part of the hash function
- search for the possible constants in question throughout the original binary and basically brute-force them until we find a matching hash
- decrypt the unpacker stub, search it for the hash function, and extract the constants

As it turns out, the constants appear as immediate operands in the binary, for example:

```
004046FF    sub eax, 789ADA71h
```

So we can skip option 3, as we already hold potential constants in our hands.

On the other hand, why bother if the easiest solution works? A short evaluation of this possibility revealed only very few false positives. For example, when all magic bytes have the same value `a`, `(a ^ a) | a == a` always holds. However, this problem is easily overcome by adding another constraint: not all bytes may be the same.

Hence, I have chosen the first method of finding the chunks without the need of checking the hash function, and so far it works flawlessly.
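With these observations, the chunk detection of a generic unpacker reduces to a cheap predicate over eight bytes — the four magic conditions plus the not-all-equal constraint. A sketch of how this check might look (my reimplementation, not code from the stub):

```c
/* Candidate check for an XPACK chunk header: the four magic conditions
   plus the additional "not all bytes equal" constraint that eliminates
   the (a ^ a) | a == a false positives. */
int isXpackMagic(const unsigned char *m)
{
    /* none of the first 4 bytes may be 0 */
    if (!m[0] || !m[1] || !m[2] || !m[3])
        return 0;

    if (((m[0] ^ m[1]) | m[2]) != m[4]) return 0;
    if (((m[1] ^ m[2]) | m[3]) != m[5]) return 0;
    if (((m[2] ^ m[3]) | m[0]) != m[6]) return 0;
    if (((m[0] ^ m[3]) | m[1]) != m[7]) return 0;

    /* extra constraint: the first four bytes must not all be identical */
    if (m[0] == m[1] && m[1] == m[2] && m[2] == m[3])
        return 0;

    return 1;
}
```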

The idea of the XPACK authors to introduce a polymorphic header was good and new to me. But exactly the characteristics the authors introduced as initial constraints have proven to be an easily exploitable weakness. Thus, the additional protection by means of a polymorphic hash function based on CRC-32 has become completely obsolete. You could consider that an implementation failure ;-)

### Decryption (1)

We need to understand the algorithm employed by the first decryption layer if we want to solve it in a generic way. A bit of basic cryptanalysis is helpful here. Let's recap the algorithm:

```cpp
for (size_t i = 0; i < s.length(); i++)
    s[i] += a - (i % b);
```

with `a` and `b` being byte values. Hence, we can see the algorithm as two steps:

- add a constant to each byte
- add a counter modulo a constant to each byte

Step 2 can be brute-forced first:

- iterate the modulus `b` over all values from 2..256
- apply `s[i] -= i % b` to each byte of the buffer
- count the number of different byte values that occur
- stop if that number is at most 65

If we stop, we have added `-(i % b)` to the values and have thus removed the modulo-counter. Hence we know that if we hit the correct modulus, only the part `s[i] += a` remains in our buffer. But this is just a "rotation" operation on each byte; it does not change the number of different values that occur in the data. And yes, the number 65 is correct, as the base-64 alphabet contains a padding character, usually "=".

As the size of the packed executable is in the range of some 100 KiB, I chose to only use the first KiB for performance reasons. A couple of dozen bytes should do the trick as well.

As a result of the analysis, we have successfully derived `b` and can cancel the modulo term: `s[i] = s[i] + a - (i % b) + (i % b)`, which collapses to `s[i] = s[i] + a`.

Now to step 1: we hold the nearly-base-64 data and make the assumption that the + operator smoothly "rotates" each byte value over the -1..0 limit. Hence, we can create a (binary) histogram over the byte range of the buffer from which we already removed the modulus operation. I call it a binary histogram as we are only interested in **whether** a byte occurs in the buffer or not.

Then, we calculate that binary histogram over the well-known base-64 alphabet and simply rotate it until both histograms match. The number of rotation operations gives us the offset `a`!

Summarizing, we derived the modulus `b` by brute-forcing over all possible byte values until we got a buffer with at most 65 different characters. We then applied the reverse modulus function. After that, we generated the binary histograms over the well-known base-64 alphabet and over the input values in order to match them, rotating one of the histograms element-wise (that is, byte-wise). Once we find a match, we have successfully derived the offset `a`.

We could have tried the naive approach, that is, brute-forcing each combination of `a` and `b` until we found a buffer that contains only bytes of the base-64 alphabet. The complexity of the naive approach, however, is roughly 256², while the presented approach needs at most about 256 + 256 loop iterations. Hence, unpacking becomes way faster ;-)

Again, the authors had a good idea but failed to implement it properly.
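The whole derivation of `b` and `a` can be sketched as follows. This is my own reconstruction of the approach: each candidate modulus is filtered by the 65-distinct-values test, and a rotation is only accepted once every byte decrypts into the base-64 alphabet.

```c
#include <stddef.h>
#include <string.h>

/* the standard base-64 alphabet plus the padding character (65 values) */
static const char B64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";

/* binary histogram: which byte values occur in buf? */
static void occurrence(const unsigned char *buf, size_t len, int hist[256])
{
    memset(hist, 0, 256 * sizeof(int));
    for (size_t i = 0; i < len; i++)
        hist[buf[i]] = 1;
}

/* Derive a and b of the layer "s[i] += a - (i % b)" from an encrypted
   base-64 buffer; returns 1 on success. */
int deriveLayer1(const unsigned char *enc, size_t len,
                 unsigned char *a, unsigned int *b)
{
    int target[256], hist[256];
    unsigned char tmp[1024];
    size_t n = len < sizeof(tmp) ? len : sizeof(tmp); /* first KiB suffices */

    occurrence((const unsigned char *)B64, 65, target);

    for (unsigned int m = 2; m <= 256; m++)
    {
        /* remove the modulo-counter for this candidate modulus */
        for (size_t i = 0; i < n; i++)
            tmp[i] = (unsigned char)(enc[i] - (i % m));

        /* base-64 data contains at most 65 distinct byte values */
        int distinct = 0;
        occurrence(tmp, n, hist);
        for (int v = 0; v < 256; v++)
            distinct += hist[v];
        if (distinct > 65)
            continue;

        /* rotate until every byte falls into the base-64 alphabet */
        for (unsigned int r = 0; r < 256; r++)
        {
            size_t i;
            for (i = 0; i < n; i++)
                if (!target[(unsigned char)(tmp[i] + r)])
                    break;
            if (i == n)
            {
                *a = (unsigned char)r;
                *b = m;
                return 1;
            }
        }
    }
    return 0;
}
```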

### Base-64 Decoding

The base-64 decoding is straightforward. Also, the alphabet is standard, so no further comments are necessary in this section...

### Decryption (2)

Again, stage "2a" of the decryption process is straightforward and we can utilize the extracted code.

Remembering stage "2b" of the decryption, we have two byte variables:

```cpp
s[i] = (s[i] ^ c) - d;
```

with `c` and `d` being byte constants randomly chosen when the original executable was packed. We could use the naive approach and iterate over `c` and `d`, but that takes quite a long time and I have something much nicer up my sleeve: differential cryptanalysis. Although it seems a bit pretentious to call it that, my method can indeed be seen as such.

We can rely on the fact that the first eight bytes of the decrypted buffer have to equal "XPXAXCXK". First, we try to get a valid XOR value `c`. So, brute-force over all possible values for `c` and check if the distance of magic[0] and magic[1] equals the distance of 'X' and 'P', and so on for the other seven characters. This will result in several possible values for `c`, and we have to test each of them.

Getting the offset, i.e. the value for `d`, is rather trivial: just calculate `'X' - (magic[0] ^ c)` and we're done. As we relied on the output of magic being "XPXAXCXK", we only have to check the remaining fields for validity. The only field we can utilize for that purpose is `compressedLength`, as we don't know the uncompressed length of the binary data.

So, how many potential solutions do we get and how many of these will lead to a correct decompression of the binary? I will leave these questions for the interested reader to answer ;-)
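The differential derivation of `c` and the offset can be sketched like this (my own reconstruction; following the text, the additive offset is computed as `'X' - (magic[0] ^ c)`, so decryption here is `(s ^ c) + offset`):

```c
/* Derive candidates for the stage-2b XOR value c from the known plaintext
   magic "XPXAXCXK" via byte distances; the additive offset is computed
   as 'X' - (magic[0] ^ c). Returns the number of candidates found. */
int deriveStage2b(const unsigned char *encMagic,
                  unsigned char *cs, unsigned char *offs)
{
    static const char XPACK[] = "XPXAXCXK";
    int n = 0;

    for (int c = 0; c < 256; c++)
    {
        int ok = 1;
        for (int i = 1; i < 8 && ok; i++)
        {
            /* (m0 ^ c) - (mi ^ c) must equal 'X' - XPACK[i] (mod 256) */
            unsigned char dEnc   = (unsigned char)((encMagic[0] ^ c) - (encMagic[i] ^ c));
            unsigned char dPlain = (unsigned char)('X' - XPACK[i]);
            if (dEnc != dPlain)
                ok = 0;
        }
        if (ok)
        {
            cs[n]   = (unsigned char)c;
            offs[n] = (unsigned char)('X' - (encMagic[0] ^ c));
            n++;
        }
    }
    return n;
}
```

Every candidate that survives the distance check automatically decrypts the whole magic correctly; only the remaining header fields can weed out the rest.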

In summary, we again employed basic cryptanalysis in order to retrieve the values for `c` and `d`, pushing the complexity down from 256² to 256 + 256. But this time, I won't bash the XPACK authors again...

### Decompression

The decompression is simple: just use the code ripped from the XPACK unpacker. It works like a charm!

## Conclusion

Writing a generic static unpacker for XPACK surely was fun. The XPACK creators went through a variety of painstaking efforts, such as

- spreading the packed contents in chunks all over the binary
- no fixed header constants
- polymorphic checksums
- several layers of polymorphic encryption
- compression and base-64 encoding

A quick experiment over a collection of binaries revealed about a dozen binaries packed with XPACK, 3 of which I had previously not recognized as such during manual inspection. The unpacker is available on my GitHub page. As usual, the code has been developed test-driven. I'm really thankful I learned this technique, as it makes the development process way easier. And not only that: reverse engineering becomes easier, too, as it often relies on the ability to develop in an agile manner. Just try it!
