I actually know how to do this, but the way in which I'm doing it leads me to my two real questions. First, I'm changing this frame in a UIButton subclass like this:
- (CGRect)accessibilityFrame {
    CGRect rect = [self.superview convertRect:self.frame toView:nil];
    rect.size.width *= 5;
    return rect;
}
If I omit the first line of this method, then the accessibility frame for this button appears in the absolute upper-left corner of the screen, regardless of its original position. All of the code samples I've found for overriding the accessibility frame use some variant of coordinate conversion, but I'm not understanding why. Why can't this property be treated in the same coordinate system as view frames? That would make modifying the accessibility frame immensely simpler.
My second question relates to how this accessibility frame actually functions with VoiceOver turned on. When I modify the accessibility frame in this manner, the focus rectangle becomes larger as expected. Unfortunately, this effect appears to be purely cosmetic. Even though the focus rectangle is now much larger than the button itself, I can still only apply focus to the button by tapping within the button bounds proper; if I tap inside the focus rectangle but outside of the button itself, nothing happens. This is a big problem for me because I'm trying to expand the "virtual" size of some controls that are so small they're very difficult to tap unless you can see them. I don't understand the point of making the focus rectangle only cosmetically bigger, if sight-impaired users can't see it anyway.
Edit: adding this bit does the trick:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    CGPoint newPoint = [self.superview convertPoint:point toView:nil];
    return CGRectContainsPoint(self.accessibilityFrame, newPoint);
}
I would still love to know why something like this is even necessary.
Edit 2: the code above only works in special cases (it breaks when the button sits inside multiple nested sub-views). A more general solution is:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    CGPoint newPoint = [self convertPoint:point toView:nil];
    return CGRectContainsPoint(self.accessibilityFrame, newPoint);
}
At its simplest, a view's bounds refers to its coordinates relative to its own space (as if the rest of your view hierarchy didn't exist), whereas its frame refers to its coordinates relative to its parent's space.
frame = a view's location and size in the parent view's coordinate system (important for placing the view in the parent)
bounds = a view's location and size in its own coordinate system (important for placing the view's content or subviews within itself)
As Apple's documentation puts it, the bounds rectangle "describes the view's location and size in its own coordinate system."
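The distinction is easy to see in code (a sketch; the frame values are arbitrary):

```objc
UIView *parent = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
UIView *child  = [[UIView alloc] initWithFrame:CGRectMake(50, 50, 100, 100)];
[parent addSubview:child];
// child.frame.origin  is (50, 50) -- where the child sits in the parent.
// child.bounds.origin is (0, 0)   -- the child's own coordinate space.
```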
The accessibility frame is specified in screen coordinates (see documentation).
The conversion
-[UIView convertRect:toView:]
converts view coordinates to window coordinates if toView is nil.
To be 100% correct, an additional conversion from window to screen coordinates using
-[UIWindow convertPoint:toWindow:]
(again, with a nil toWindow) would be needed.
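Putting both conversions together, a fully correct version of the override from the question might look like this (a sketch, assuming the button is attached to a window; the UIKit function UIAccessibilityConvertFrameToScreenCoordinates performs the same view-to-screen conversion in a single call):

```objc
- (CGRect)accessibilityFrame {
    // Superview coordinates -> window coordinates.
    CGRect rect = [self.superview convertRect:self.frame toView:nil];
    // Window coordinates -> screen coordinates.
    rect = [self.window convertRect:rect toWindow:nil];
    rect.size.width *= 5;
    return rect;
}
```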
Now, you probably already know that you need to double-tap within the VoiceOver cursor in order to trigger the button's action. By default, the double-tap hits at the center of the accessibility frame. However, if the center of your modified accessibility frame is not on your button, nothing will happen. Conveniently, the accessibilityActivationPoint property lets you put that activation point elsewhere (e.g. right onto your button's actual center). Again, don't forget to convert to screen coordinates.
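A minimal sketch of such an override, mirroring the conversion chain used for the accessibility frame:

```objc
- (CGPoint)accessibilityActivationPoint {
    // Convert the button's center from superview coordinates to window
    // coordinates, then to screen coordinates, so the VoiceOver
    // double-tap always lands on the actual button.
    CGPoint center = [self.superview convertPoint:self.center toView:nil];
    return [self.window convertPoint:center toWindow:nil];
}
```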
One thing to consider before you run off making huge accessibility frames: If you have really tiny buttons, they might be difficult to hit for normal users too. Consider making them (or at least the area where they respond to touches) larger. Apple recommends at least 44 x 44 points.
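If all you need is a bigger touch target (rather than a bigger VoiceOver focus rectangle), overriding pointInside:withEvent: alone is enough. A sketch that pads the hit area up to Apple's recommended 44 x 44 points:

```objc
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Negative insets grow the rectangle; the MIN clamp leaves buttons
    // that are already 44 points or larger unchanged.
    CGFloat dx = MIN(0, (self.bounds.size.width  - 44.0) / 2);
    CGFloat dy = MIN(0, (self.bounds.size.height - 44.0) / 2);
    return CGRectContainsPoint(CGRectInset(self.bounds, dx, dy), point);
}
```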