In .NET 4.5 this cipher worked perfectly on both 32-bit and 64-bit architectures. Switching the project to .NET 4.6 breaks the cipher completely in 64-bit, and in 32-bit it only works with an odd patch, described below.
In my method "DecodeSkill", SkillLevel is the only value that decodes incorrectly on .NET 4.6. The variables used here are read from a network stream and arrive encoded.
DecodeSkill (on .NET 4.5, always returns the proper decoded value for SkillLevel)
private void DecodeSkill()
{
    SkillId = (ushort) (ExchangeShortBits((SkillId ^ ObjectId ^ 0x915d), 13) + 0x14be);
    // The only line that breaks on .NET 4.6: the (byte) cast should truncate to the low 8 bits
    SkillLevel = ((ushort) ((byte)SkillLevel ^ 0x21));
    TargetObjectId = (ExchangeLongBits(TargetObjectId, 13) ^ ObjectId ^ 0x5f2d2463) + 0x8b90b51a;
    PositionX = (ushort) (ExchangeShortBits((PositionX ^ ObjectId ^ 0x2ed6), 15) + 0xdd12);
    PositionY = (ushort) (ExchangeShortBits((PositionY ^ ObjectId ^ 0xb99b), 11) + 0x76de);
}
ExchangeShortBits
private static uint ExchangeShortBits(uint data, int bits)
{
    // Rotate the low 16 bits of data right by 'bits'
    data &= 0xffff;
    return (data >> bits | data << (16 - bits)) & 0xffff;
}
DecodeSkill (Patched for .NET 4.6 32-bit, notice "var patch = SkillLevel")
private void DecodeSkill()
{
    SkillId = (ushort) (ExchangeShortBits((SkillId ^ ObjectId ^ 0x915d), 13) + 0x14be);
    // Assigning through an extra local ("the patch") makes SkillLevel decode correctly on 32-bit
    var patch = SkillLevel = ((ushort) ((byte)SkillLevel ^ 0x21));
    TargetObjectId = (ExchangeLongBits(TargetObjectId, 13) ^ ObjectId ^ 0x5f2d2463) + 0x8b90b51a;
    PositionX = (ushort) (ExchangeShortBits((PositionX ^ ObjectId ^ 0x2ed6), 15) + 0xdd12);
    PositionY = (ushort) (ExchangeShortBits((PositionY ^ ObjectId ^ 0xb99b), 11) + 0x76de);
}
Assigning SkillLevel to that extra local variable, in 32-bit only, causes SkillLevel to always be the correct value. Remove this patch and the value is always incorrect. In 64-bit, the value is always incorrect even with the patch.
I've tried applying MethodImplOptions.NoOptimization and MethodImplOptions.NoInlining to the decode method, thinking it would make a difference.
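For reference, that attempt looked like this (the attribute lives in System.Runtime.CompilerServices):

[MethodImpl(MethodImplOptions.NoInlining | MethodImplOptions.NoOptimization)]
private void DecodeSkill()
{
    // same body as above
}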
Any ideas as to what would cause this?
Edit: I was asked to give an example of input, good output, and bad output. This is from an actual usage scenario: the values were sent from the client and properly decoded by the server using the "patch" on .NET 4.6.
Input:
ObjectId = 1000001
TargetObjectId = 2778236265
PositionX = 32409
PositionY = 16267
SkillId = 28399
SkillLevel = 8481
Good Output:
TargetObjectId = 0
PositionX = 302
PositionY = 278
SkillId = 1115
SkillLevel = 0
Bad Output:
TargetObjectId = 0
PositionX = 302
PositionY = 278
SkillId = 1115
SkillLevel = 34545
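For what it's worth, the expected SkillLevel of 0 falls straight out of the decode arithmetic: 8481 is 0x2121, the (byte) cast should leave 0x21, and 0x21 ^ 0x21 is 0:

ushort skillLevel = 8481;                         // 0x2121
var decoded = (ushort) ((byte)skillLevel ^ 0x21); // 0x21 ^ 0x21 = 0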
Edit #2:
I should have included this part from the start; it's definitely an important part of the problem.
EncodeSkill (Timestamp is Environment.TickCount)
private void EncodeSkill()
{
    SkillId = (ushort) (ExchangeShortBits(ObjectId - 0x14be, 3) ^ ObjectId ^ 0x915d);
    // The timestamp byte is packed into the high byte; the level stays in the low byte
    SkillLevel = (ushort) ((SkillLevel + 0x100*(Timestamp%0x100)) ^ 0x3721);
    Arg1 = MathUtils.BitFold32(SkillId, SkillLevel);
    TargetObjectId = ExchangeLongBits(((TargetObjectId - 0x8b90b51a) ^ ObjectId ^ 0x5f2d2463u), 19);
    PositionX = (ushort) (ExchangeShortBits((uint) PositionX - 0xdd12, 1) ^ ObjectId ^ 0x2ed6);
    PositionY = (ushort) (ExchangeShortBits((uint) PositionY - 0x76de, 5) ^ ObjectId ^ 0xb99b);
}
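Note how the SkillLevel encode and decode line up: the timestamp only touches the high byte of the encoded value, so the decoder can recover the level from the low byte alone. A minimal round-trip sketch, with an assumed level and tick count:

ushort level = 5;
int timestamp = 0x449A;                                              // assumed Environment.TickCount
var encoded = (ushort) ((level + 0x100*(timestamp%0x100)) ^ 0x3721); // 0x9A05 ^ 0x3721 = 0xAD24
var decoded = (ushort) ((byte)encoded ^ 0x21);                       // low byte 0x24 ^ 0x21 = 5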
BitFold32
public static int BitFold32(int lower16, int higher16)
{
    // Pack two 16-bit values into a single 32-bit int
    return (lower16) | (higher16 << 16);
}
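For example (illustrative values only):

BitFold32(0x1234, 0x5678); // 0x56781234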
ExchangeLongBits
private static uint ExchangeLongBits(uint data, int bits)
{
    // Rotate all 32 bits of data right by 'bits'
    return data >> bits | data << (32 - bits);
}
Here is the code I've come up with that I think is analogous to your actual scenario:
using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        var dc = new Decoder();
        dc.DecodeSkill();
        Debug.Assert(dc.TargetObjectId == 0 && dc.PositionX == 302 && dc.PositionY == 278 && dc.SkillId == 1115 && dc.SkillLevel == 0);
    }
}

class Decoder
{
    // Values from the question's example scenario
    public uint ObjectId = 1000001;
    public uint TargetObjectId = 2778236265;
    public ushort PositionX = 32409;
    public ushort PositionY = 16267;
    public ushort SkillId = 28399;
    public ushort SkillLevel = 8481;

    public void DecodeSkill()
    {
        SkillId = (ushort)(ExchangeShortBits((SkillId ^ ObjectId ^ 0x915d), 13) + 0x14be);
        SkillLevel = ((ushort)((byte)(SkillLevel) ^ 0x21));
        TargetObjectId = (ExchangeLongBits(TargetObjectId, 13) ^ ObjectId ^ 0x5f2d2463) + 0x8b90b51a;
        PositionX = (ushort)(ExchangeShortBits((PositionX ^ ObjectId ^ 0x2ed6), 15) + 0xdd12);
        PositionY = (ushort)(ExchangeShortBits((PositionY ^ ObjectId ^ 0xb99b), 11) + 0x76de);
    }

    private static uint ExchangeShortBits(uint data, int bits)
    {
        data &= 0xffff;
        return (data >> bits | data << (16 - bits)) & 0xffff;
    }

    public static int BitFold32(int lower16, int higher16)
    {
        return (lower16) | (higher16 << 16);
    }

    private static uint ExchangeLongBits(uint data, int bits)
    {
        return data >> bits | data << (32 - bits);
    }
}
You're XORing 8481 with 33. That's 8448, which is what I see on my machine. Assuming SkillLevel is a ushort, I think what is going on is that you're expecting the cast to byte to truncate SkillLevel so that all that is left is the low 8 bits, but this is not happening, so when you cast back to ushort the higher-order bits are still there.
If you want to reliably clear all bits above the lower 8, you need to mask them off like so:
SkillLevel = ((ushort) ((SkillLevel & 255) ^ 0x21));
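As a quick sanity check, the mask produces the value the byte cast was supposed to produce:

ushort skillLevel = 8481;                           // 0x2121
var viaCast = (ushort) ((byte)skillLevel ^ 0x21);   // intended result: 0x21 ^ 0x21 = 0
var viaMask = (ushort) ((skillLevel & 255) ^ 0x21); // masked version: also 0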
EDIT:
I have a suspicion that this has something to do with numeric promotions from operators. The ^ operator, when applied to a byte or a ushort and an int, will promote both operands to int, since implicit conversions exist from both possible types of the first operand to int. It seems like what is happening is that the explicit conversion from ushort to byte, which would cause truncation, is being skipped. Now you just have two ints, which, when XORed and then truncated back to ushort, keep their higher-order bits.
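If that's right, the two computations below should differ exactly as described (a sketch using the question's SkillLevel; 8448 is the value I see):

ushort skillLevel = 8481;               // 0x2121
int withCast = (byte)skillLevel ^ 0x21; // truncation happens first: 0x21 ^ 0x21 = 0
int castSkipped = skillLevel ^ 0x21;    // promotion without truncation: 0x2121 ^ 0x21 = 0x2100 = 8448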