
The Hamming distance between two integers is the number of positions at which the corresponding bits are different.

Now your job is to find the total Hamming distance between all pairs of the given numbers.

**Example:**

```
Input: 4, 14, 2
Output: 6
Explanation: In binary representation, the 4 is 0100, 14 is 1110, and 2 is 0010
(just showing the four bits relevant in this case). So the answer will be:
HammingDistance(4, 14) + HammingDistance(4, 2) + HammingDistance(14, 2) = 2 + 2 + 2 = 6.
```

**Note:**

- Elements of the given array are in the range of `0` to `10^9`.
- Length of the array will not exceed `10^4`.


## Solution

#### Approach #1 Brute Force [Time Limit Exceeded]


**Intuition**

Check all the unique pairs of elements for differing bits. The `xor` of two bits is `1` if they are not the same, and `0` otherwise.

**Algorithm**

Bitwise `xor` of two numbers gives a bitwise representation of where the bits of the two numbers differ. Such bit positions are represented by a `1` bit in the result. For example:

```
  0010 1101 == 45
^ 0100 1010 == 74
-----------------
  0110 0111
   ^^   ^^^  => 5 differing bits
```

Hence the numbers `45` and `74` have five differing bits. Thus the Hamming distance between them is `5`.
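This check is easy to reproduce in Python (the values `45` and `74` come from the example above):

```python
# XOR highlights exactly the positions where two numbers' bits differ.
x = 45 ^ 74                  # 0b0101101 ^ 0b1001010 -> 0b1100111

# Counting the 1 bits of the XOR gives the Hamming distance.
hamming = bin(x).count("1")
print(hamming)               # -> 5
```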

For each of the pairs of elements, simply apply bitwise `xor` to them to find a resultant representation that tells us the differing bits. Count the `1` bits to find the Hamming distance for that pair of elements. Sum over all pairs to get the total Hamming distance.

**NOTE:** The `__builtin_popcount()` function is a compiler built-in available in **C** (and hence, by extension, **C++**) on GCC and Clang, which gives the count of `1` bits in an `int` argument. Being a low-level built-in, it is understandably faster than running a hand-rolled loop. As an alternative, you can iterate over all the bits of the number and count the `1` bits yourself. Take a look at the code for Approach #2 for hints on how to achieve that.
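The editorial's original code block did not survive in this copy; the following is a minimal sketch of the brute-force approach in Python (the function name `total_hamming_brute` is an assumption, not LeetCode's class wrapper):

```python
def total_hamming_brute(nums):
    """Total Hamming distance over all unique pairs, by brute force."""
    total = 0
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):
            # XOR marks the differing bits; count the 1 bits in the result.
            total += bin(nums[i] ^ nums[j]).count("1")
    return total

print(total_hamming_brute([4, 14, 2]))  # -> 6
```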

**Complexity Analysis**

- Time complexity: `O(n^2 * b) ≈ O(n^2)`.
  - There are exactly `n(n-1)/2` unique pairs of elements for an `n` element array.
  - Each of these pairs, when `xor`ed, results in a number which is `b` bits long. Here `b` is the number of bits needed to represent the largest value any of the elements can ever take.
  - We iterate over these `b` bits to count the number of `1` bits. In our case, since all elements are `<= 10^9 < 2^30`, `b` is a small constant (at most `30`). Hence counting the `1` bits takes place in nearly constant (i.e. `O(1)`) time.
- Space complexity: `O(1)` constant space.

#### Approach #2 Loop over the bits! [Accepted]

**Intuition**

Looping over all possible combinations of two element pairs increases quadratically with the size of the input. If we could instead loop over the bits of the elements (whose count is constant), we could shave off an input dimension from our runtime complexity.

**Algorithm**

For any particular bit position, count the number of elements with this bit **ON** (i.e. this particular bit is `1`). Let this count be `k`. Hence the number of elements with this bit **OFF** (i.e. `0`) is `n - k` (in an `n` element array).

Certainly `k * (n - k)` unique pairs of elements exist where one element has this particular bit **ON** while the other element has it **OFF** (i.e. this particular bit differs for the two elements of the pair). We can argue that every such pair contributes one unit to the Hamming distance for this particular bit. Since the count of such unique pairs is `k * (n - k)`, the Hamming distance contributed by this particular bit is `k * (n - k)`.

For each of the `b` bits that we can check, we can calculate the Hamming distance pertaining to that bit. Taking the sum over the Hamming distances of all these bits, we get the total Hamming distance.

**NOTE:** You *can switch the order of the loops* while counting `1` bits without affecting the complexity. However, it might give some performance changes due to locality of reference and the resultant cache hits/misses.
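Since the original code block is missing from this copy, here is a sketch of the bit-counting approach in Python (the function name `total_hamming_by_bits` and the plain-function form are assumptions):

```python
def total_hamming_by_bits(nums):
    """Total Hamming distance by looping over bit positions, not pairs."""
    n = len(nums)
    total = 0
    for bit in range(30):  # all elements are <= 10^9 < 2^30
        # k = number of elements with this bit ON.
        k = sum((num >> bit) & 1 for num in nums)
        # Each (ON, OFF) pairing differs at this bit: k * (n - k) pairs.
        total += k * (n - k)
    return total

print(total_hamming_by_bits([4, 14, 2]))  # -> 6
```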

**Complexity Analysis**

- Time complexity: `O(b * n) ≈ O(n)`. Runtime performance is limited by the double loop where we count, for each bit, the number of elements that have that bit set. In our case, since all elements are `<= 10^9 < 2^30`, the number of bits `b` is a small constant (at most `30`). Thus the inner loop runs in nearly constant time.
- Space complexity: `O(b)` extra space.
  - For each of the `b` bits, we need to maintain a count which is later used to calculate the Hamming distance for that bit. Since `b` is a small constant in our case, that takes nearly constant extra space.
  - Another thing to notice is that if we **switch the order of the double loop,** we can do away with storing the counts and calculate the Hamming distance for each bit then and there. That results in only `O(1)` constant extra space being used.

#### Bonus!

This question is a perfect example of how vectorized operations can result in small, elegant, and performant code. Take a look at this slick **Python** solution to this problem (by @StefanPochmann):
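The solution itself is not preserved in this copy; the snippet below is a reconstruction consistent with the description that follows (32-bit binary strings from `map`, unpacked by `*` into `zip`), so treat it as a faithful sketch rather than the verbatim original:

```python
class Solution:
    def totalHammingDistance(self, nums):
        # '{:032b}'.format renders each number as a 32-bit binary string;
        # zip(*...) groups the same bit position across all numbers, and
        # count('0') * count('1') is the number of differing pairs per bit.
        return sum(b.count('0') * b.count('1')
                   for b in zip(*map('{:032b}'.format, nums)))
```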

The `*` operator turns the list of `32`-bit wide binary strings returned by `map` into individual arguments to the `zip` function. `zip` then vectorizes the string arguments to create a list of vectors, each of which is a vector `b` of a particular bit drawn from every element in the input array; there are `32` such vectors, of size `len(nums)` each, in this list. Finally, we use the same technique as Approach #2 to calculate the total Hamming distance.


Analysis written by @babhishek21.
