This answer cites N4082, which shows that the upcoming changes to std::shared_ptr will allow both T[] and T[N] variants:
Unlike the unique_ptr partial specialization for arrays, both shared_ptr<T[]> and shared_ptr<T[N]> will be valid and both will result in delete[] being called on the managed array of objects.

    template<class Y> explicit shared_ptr(Y* p);

Requires: Y shall be a complete type. The expression delete[] p, when T is an array type, or delete p, when T is not an array type, shall be well-formed, shall have well defined behavior, and shall not throw exceptions. When T is U[N], Y(*)[N] shall be convertible to T*; when T is U[], Y(*)[] shall be convertible to T*; otherwise, Y* shall be convertible to T*.
Unless I'm mistaken, a Y(*)[N] could only be formed by taking the address of an array, which clearly can't be owned or deleted by a shared_ptr. I also don't see any indication that N is used in any way to enforce the size of the managed object.
What is the motivation behind allowing the T[N] syntax? Does it yield any actual benefit, and if so, how is it used?
You can get a pointer to a nested object sharing ownership with a std::shared_ptr to the containing object. If this nested object happens to be an array and you want to access it as an array type, you actually need to use T[N] with suitable T and N:
    #include <functional>
    #include <iostream>
    #include <iterator>
    #include <memory>
    #include <queue>
    #include <utility>
    #include <vector>

    using queue = std::queue<std::function<void()>>;

    // Detects whether T is a range, i.e., whether std::begin(t) is well-formed.
    template <typename T>
    struct is_range {
        template <typename R> static std::false_type test(R*, ...);
        template <typename R> static std::true_type  test(R* r, decltype(std::begin(*r))*);
        static constexpr bool value = decltype(test(std::declval<T*>(), nullptr))();
    };

    template <typename T>
    std::enable_if_t<!is_range<T>::value> process(T const& value) {
        std::cout << "value=" << value << "\n";
    }

    template <typename T>
    std::enable_if_t<is_range<T>::value> process(T const& range) {
        std::cout << "range=[";
        auto it(std::begin(range)), e(std::end(range));
        if (it != e) {
            std::cout << *it;
            while (++it != e) {
                std::cout << ", " << *it;
            }
        }
        std::cout << "]\n";
    }

    // Builds a task whose shared_ptr<T> aliases the member 'value' while
    // sharing ownership with the pointer p to the containing object. When
    // the member is an array, T is deduced as U[N].
    template <typename P, typename T>
    std::function<void()> make_fun(P const& p, T& value) {
        return [ptr = std::shared_ptr<T>(p, &value)]{ process(*ptr); };
        // here ----^
    }

    template <typename T, typename... M>
    void enqueue(queue& q, std::shared_ptr<T> const& ptr, M... members) {
        // Expands the pack, pushing one task per pointer-to-member.
        (void)std::initializer_list<bool>{
            (q.push(make_fun(ptr, (*ptr).*members)), true)...
        };
    }

    struct foo {
        template <typename... T>
        foo(int v, T... a): value(v), array{ a... } {}
        int              value;
        int              array[3];
        std::vector<int> vector;
    };

    int main() {
        queue q;
        auto ptr = std::make_shared<foo>(1, 2, 3, 4);
        enqueue(q, ptr, &foo::value, &foo::array, &foo::vector);
        while (!q.empty()) {
            q.front()();
            q.pop();
        }
    }
In the above code q is just a simple std::queue<std::function<void()>>, but I hope you can imagine that it could be a thread pool off-loading the processing to another thread. The scheduled processing is also trivial but, again, I hope you can imagine that it actually represents some substantial amount of work.