For example,
struct Foo
{
    Foo(uint8_t b0, uint8_t b1, uint16_t b23)
    {
        // some code here
    }
    uint32_t m_n;
};
I can write something like this:
auto dest = reinterpret_cast<uint8_t*>(&m_n);
memcpy(dest, &b0, sizeof(b0));
memcpy(dest + sizeof(b0), &b1, sizeof(b1));
memcpy(dest + sizeof(b0) + sizeof(b1), &b23, sizeof(b23));
But it's very ugly. And what should I do when there are 15 such variables (don't ask why)?
I suspect you need this kind of function:
#include <array>
#include <cstdint>
#include <type_traits>

template<typename T>
std::enable_if_t<std::is_integral_v<T>, std::array<uint8_t, sizeof(T)>>
littleEndianBytes(T value)
{
    static_assert(sizeof(uint8_t) == 1);
    using result_type = std::array<uint8_t, sizeof(T)>;
    result_type result;
    for (auto& x : result) {
        x = value & 0xFF;    // take the least significant byte
        value >>= 8;         // then shift the next byte down
    }
    return result;
}
https://wandbox.org/permlink/ooGuIzZaw8tdffaT
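A sketch (assuming <cstring> is included for memcpy) of how this helper might be used in the constructor from the question: decompose each argument into its little-endian bytes and append them to m_n in turn. Scaling to 15 arguments just means 15 calls to the append lambda.

Foo(uint8_t b0, uint8_t b1, uint16_t b23)
{
    auto* dest = reinterpret_cast<uint8_t*>(&m_n);
    auto append = [&dest](auto value) {             // generic lambda (C++14 or later)
        auto bytes = littleEndianBytes(value);      // bytes of value, least significant first
        memcpy(dest, bytes.data(), bytes.size());
        dest += bytes.size();
    };
    append(b0);    // writes 1 byte
    append(b1);    // writes 1 byte
    append(b23);   // writes 2 bytes
}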
In the particular case you've shown, you could move the given arguments into the target using bit-shifting (as suggested in the comments) and bitwise ORing, which would give code like this:
m_n = (b23 << 16) | (b1 << 8) | b0;
But this is very specific to the case you have given. If your other variables have different types and/or you want to copy things differently, you would have to adapt the code to suit each purpose.
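For reference, here is a minimal sketch of that line in the context of the constructor, with explicit uint32_t casts added so the widening doesn't rely on integer promotion:

Foo(uint8_t b0, uint8_t b1, uint16_t b23)
{
    // b0 goes into the lowest byte, b1 into the next, b23 into the upper 16 bits
    m_n = (uint32_t(b23) << 16) | (uint32_t(b1) << 8) | uint32_t(b0);
}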
Another way (using the same example), which is more easily adaptable to different target types, would be something like this:
uint8_t bytes[4] = { b0, b1, uint8_t(b23 & 0xFF), uint8_t(b23 >> 8) };
memcpy(&m_n, bytes, 4);
where you first initialize a byte array from the given arguments (it could easily be increased to 16 bytes) and then use memcpy to move the byte array into the target.
This latter approach could be further 'optimized' by making bytes a member of Foo and setting up its values in the constructor's member initializer list:
#include <cstdint>
#include <cstring>   // for memcpy

struct Foo
{
    Foo(uint8_t b0, uint8_t b1, uint16_t b23)
        : bytes{ b0, b1, uint8_t(b23 & 0xFF), uint8_t(b23 >> 8) }
    {
        memcpy(&m_n, bytes, 4);
    }
    uint8_t bytes[4];
    uint32_t m_n;
};
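As a quick sanity check, a hypothetical main could construct a Foo and print m_n; since the memcpy places bytes[0] at the lowest address of m_n, the value shown depends on the host byte order (it is 0x44332211 on a little-endian machine):

#include <cinttypes>
#include <cstdio>

int main()
{
    Foo f(0x11, 0x22, 0x4433);                 // b0 = 0x11, b1 = 0x22, b23 = 0x4433
    std::printf("0x%08" PRIX32 "\n", f.m_n);   // prints 0x44332211 on a little-endian machine
}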
Feel free to ask for further clarification and/or explanation.