Are the following two examples equivalent?

Example 1:

```rust
let x = String::new();
let y = &x[..];
```

Example 2:

```rust
let x = String::new();
let y = &*x;
```

Is one more efficient than the other or are they basically the same?
In the case of `String` and `Vec`, they do the same thing. In general, however, they aren't quite equivalent.
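As a quick sanity check, here's a minimal sketch showing that for a plain `String` both forms produce the same `&str` (the example string is arbitrary):

```rust
fn main() {
    let x = String::from("hello");

    // For a plain `String`, both forms yield a `&str` borrowing the
    // whole string, so they are interchangeable here.
    let a: &str = &x[..];
    let b: &str = &*x;

    assert_eq!(a, b);
}
```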
First, you have to understand `Deref`. This trait is implemented in cases where a type is logically "wrapping" some lower-level, simpler value. For example, all of the "smart pointer" types (`Box`, `Rc`, `Arc`) implement `Deref` to give you access to their contents. It is also implemented for `String` and `Vec`: `String` "derefs" to the simpler `str`, and `Vec<T>` derefs to the simpler `[T]`.
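To show the shape of the trait, here's a minimal sketch with a made-up `Wrapper` type that derefs to `str` (the `Wrapper` name and its field are hypothetical, purely for illustration):

```rust
use std::ops::Deref;

// Hypothetical wrapper around a String, for illustration only.
struct Wrapper(String);

impl Deref for Wrapper {
    // The "simpler form" this type derefs to.
    type Target = str;

    fn deref(&self) -> &str {
        &self.0
    }
}

fn main() {
    let w = Wrapper(String::from("hi"));
    // `&*w` goes through `Deref` to reach the inner `str`.
    let s: &str = &*w;
    assert_eq!(s, "hi");
}
```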
Writing `*s` is just manually invoking `Deref::deref` to turn `s` into its "simpler form". It is almost always written `&*s`, however: although the `Deref::deref` signature says it returns a borrowed pointer (`&Target`), the compiler inserts a second automatic deref. This is so that, for example, `{ let x = Box::new(42i32); *x }` results in an `i32` rather than a `&i32`.

So `&*s` is really just shorthand for `Deref::deref(&s)`.
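You can check that equivalence directly; a small sketch (the string contents are arbitrary):

```rust
use std::ops::Deref;

fn main() {
    let s = String::from("hello");

    // All three are equivalent ways of getting a `&str` from a `String`.
    let a: &str = &*s;
    let b: &str = Deref::deref(&s);
    let c: &str = <String as Deref>::deref(&s);

    assert_eq!(a, b);
    assert_eq!(b, c);
}
```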
`s[..]` is syntactic sugar for `s.index(RangeFull)`, implemented by the `Index` trait. This means to slice the "whole range" of the thing being indexed; for both `String` and `Vec`, this gives you a slice of the entire contents. Again, the result is technically a borrowed pointer, but Rust auto-derefs this one as well, so it's also almost always written `&s[..]`.
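Here's a sketch of that desugaring, using a `Vec` (the vector's contents are arbitrary):

```rust
use std::ops::{Index, RangeFull};

fn main() {
    let v = vec![1, 2, 3];

    // `&v[..]` is sugar for (roughly) this explicit `Index::index` call.
    let a: &[i32] = &v[..];
    let b: &[i32] = v.index(RangeFull);

    assert_eq!(a, b);
}
```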
So what's the difference? Hold that thought; let's talk about `Deref` chaining.
To take a specific example: because you can view a `String` as a `str`, it would be really helpful to have all the methods available on `str`s automatically available on `String`s as well. Rather than inheritance, Rust does this by `Deref` chaining.
The way it works is that when you ask for a particular method on a value, Rust first looks at the methods defined for that specific type. Let's say it doesn't find the method you asked for; before giving up, Rust will check for a `Deref` implementation. If it finds one, it invokes it and then tries again.
This means that when you call `s.chars()` where `s` is a `String`, what's actually happening is that you're calling `s.deref().chars()`, because `String` doesn't have a method called `chars`, but `str` does (scroll up to see that `String` only gets this method because it implements `Deref<Target=str>`).
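To make the chaining visible, here's a sketch spelling out the implicit `deref` call (the string is arbitrary):

```rust
use std::ops::Deref;

fn main() {
    let s = String::from("abc");

    // `String` has no `chars` method of its own, so method lookup
    // derefs to `str` and finds `str::chars` there.
    let implicit: Vec<char> = s.chars().collect();
    let explicit: Vec<char> = s.deref().chars().collect();

    assert_eq!(implicit, explicit);
}
```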
Getting back to the original question, the difference between `&*s` and `&s[..]` is in what happens when `s` is not just `String` or `Vec<T>`. Let's take a few examples:
- `s: String`: `&*s: &str`, `&s[..]: &str`.
- `s: &String`: `&*s: &String`, `&s[..]: &str`.
- `s: Box<String>`: `&*s: &String`, `&s[..]: &str`.
- `s: Box<Rc<&String>>`: `&*s: &Rc<&String>`, `&s[..]: &str`.

`&*s` only ever peels away one layer of indirection; `&s[..]` peels away all of them. This is because none of `Box`, `Rc`, `&`, etc. implement the `Index` trait, so `Deref` chaining causes the call to `s.index(RangeFull)` to chain through all those intermediate layers.
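Here's a sketch of the `Box<String>` row from the list above (the boxed string is arbitrary):

```rust
fn main() {
    let s: Box<String> = Box::new(String::from("hi"));

    // One explicit deref strips exactly one layer: Box<String> -> String.
    let one: &String = &*s;

    // Two explicit derefs strip two layers: Box<String> -> String -> str.
    let two: &str = &**s;

    // Indexing chains through *all* the layers down to `str` in one go.
    let all: &str = &s[..];

    assert_eq!(two, all);
    assert_eq!(one.as_str(), all);
}
```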
Which one should you use? Whichever you want. Use `&*s` (or `&**s`, or `&***s`) if you want to control exactly how many layers of indirection to strip off. Use `&s[..]` if you want to strip them all off and just get at the innermost representation of the value.
Or, you can do what I do and use `&*s` because it reads left-to-right, whereas `&s[..]` reads left-to-right-to-left-again and that annoys me. :)
Footnotes: strictly speaking, there are also `Deref` coercions; and there are `DerefMut` and `IndexMut`, which do all of the above but for `&mut` instead of `&`.
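A quick sketch of those mutable counterparts (the string is arbitrary):

```rust
fn main() {
    let mut s = String::from("abc");

    // DerefMut: strips one layer, mutably.
    let a: &mut str = &mut *s;
    a.make_ascii_uppercase();

    // IndexMut: slices the whole range, mutably.
    let b: &mut str = &mut s[..];
    b.make_ascii_lowercase();

    assert_eq!(s, "abc");
}
```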