A potential pitfall of CGRectIntegral

This morning while I was checking an app for misaligned elements, I happened upon a misaligned button. (If you’re not using either the iOS Simulator or Instruments to check your app for misaligned images, you should be, but that’s a post for another day.)

Checking the code, it was obvious to me where the problem was:

backButton.frame = CGRectMake(5, (navigationBar.bounds.size.height
    - imageBack.size.height)/2, imageBack.size.width,
    imageBack.size.height);

Centering code is especially prone to pixel misalignment. In this case imageBack has a size of (50, 29) while the navigationBar has a height of 44 points. The code above generates a rect with origin = (5, 7.5) and size = (50, 29). So the image ends up vertically misaligned, which in turn makes the child text label inside also misaligned, and hence they show up painted in magenta when the Color Misaligned Images option is checked in the iOS Simulator Debug menu.

This looks like a job for CGRectIntegral, right? But when I change the code to this:

backButton.frame = CGRectIntegral(CGRectMake(5,
    (navigationBar.bounds.size.height - imageBack.size.height)/2,
    imageBack.size.width, imageBack.size.height));

I end up with this:

The button is no longer misaligned, but it is now being stretched (hence the yellow wash). Debugging shows that CGRectIntegral has converted the input rect of (5, 7.5) x (50, 29) into (5, 7) x (50, 30). So now the image is being stretched vertically by 1 point. That might be fine for UILabel but not for an image.

The other issue with using CGRectIntegral is that the original rect is actually fine for retina devices because they have 2 pixels per point, so a value of 7.5 actually falls on a pixel boundary, and is the optimal centering for this image. If we adjusted it to origin.y = 7 (without stretching) then it would be 2 pixels closer to the top than to the bottom on a retina device.

I’ve written some helper functions to correctly pixel align rectangles (not point align) for both retina and non-retina screens, and posted them in this gist.

Under non-retina it would convert the rectangle to (5, 7) x (50, 29) to pixel align it without stretching, while under retina it would leave the rectangle unmodified at (5, 7.5) x (50, 29).
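The gist is the authoritative version, but a minimal sketch of the core idea looks something like this (MYRectPixelAlign is my name for it, and it assumes the main screen’s scale):

// Minimal sketch (not the full gist): snap a rect's origin down to a
// pixel boundary for the current screen scale, without changing its size.
static inline CGRect MYRectPixelAlign(CGRect rect) {
    CGFloat scale = [UIScreen mainScreen].scale; // 1.0 non-retina, 2.0 retina
    rect.origin.x = floor(rect.origin.x * scale) / scale;
    rect.origin.y = floor(rect.origin.y * scale) / scale;
    return rect;
}

For the button above, an origin.y of 7.5 floors to 7 at scale 1.0 (15 / 2 pixels rounds down to 14 / 2), but stays at 7.5 at scale 2.0, because 7.5 * 2 = 15 is already a whole pixel.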

This finally clears the magenta (alignment) and yellow (stretch) washes from the button:

Addendum

According to the Apple Documentation for CGRectIntegral:

A rectangle with the smallest integer values for its origin and size that contains the source rectangle. That is, given a rectangle with fractional origin or size values, CGRectIntegral rounds the rectangle’s origin downward and its size upward to the nearest whole integers, such that the result contains the original rectangle.

The fractional origin of (5, 7.5) is rounded downward to (5, 7), but I initially thought the size would be left unmodified (not rounded up) because it already consists of two whole integers. But that wouldn’t contain the original rectangle, whose lower right corner is positioned at (55, 36.5). In order to contain the original rectangle, the height has to be increased by 1 point, from 29 to 30.
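In code form:

CGRect rect = CGRectIntegral(CGRectMake(5, 7.5, 50, 29));
// rect is now (5, 7) x (50, 30): the origin is rounded down, and the
// height is rounded up from 29 to 30 so the result still reaches y = 36.5.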

On the importance of setting shadowPath

It’s super easy to add drop shadows to any view in iOS. All you need to do is

  1. add QuartzCore framework to your project (if not there already)
  2. import QuartzCore into your implementation file
  3. add a line such as [myView.layer setShadowOpacity:0.5]

and voilà, your view now has a drop shadow.
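Spelled out, it is just a couple of lines (only the opacity line is required; the offset and radius tweaks are optional extras I’m adding for illustration):

#import <QuartzCore/QuartzCore.h>

// A nonzero shadowOpacity is all it takes to get a drop shadow.
[myView.layer setShadowOpacity:0.5];
// Optionally tweak the offset and blur radius from their defaults.
[myView.layer setShadowOffset:CGSizeMake(0, 3)];
[myView.layer setShadowRadius:5.0];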


However, the easy way is rarely the best way in terms of performance.  If you have to animate this view (and especially if it’s part of a UITableViewCell) you will probably notice stutters in the animation.  This is because calculating the drop shadow for your view requires Core Animation to do an offscreen rendering pass to determine the exact shape of your view in order to figure out how to render its drop shadow.  (Remember, your view could be any complex shape, possibly even with holes in it.)

To convince yourself of this, turn on the Color Offscreen-Rendered option in the Simulator’s Debug menu.


Alternatively, target a physical device, launch Instruments (⌘I), choose the Core Animation template, select the Core Animation instrument, and check the Color Offscreen-Rendered Yellow option.


Then in the Simulator (or on your device) you will see something like this:


This indicates that something (in our case the drop shadow) is forcing an expensive offscreen rendering pass.

The quick fix

Fortunately, fixing the drop shadow performance is typically almost as easy as adding a drop shadow.  All you need to do is provide Core Animation with some information about the shape of your view to help it along.  Calling setShadowPath: on your view’s layer does exactly that:

[myView.layer setShadowPath:[[UIBezierPath 
    bezierPathWithRect:myView.bounds] CGPath]];

(Note: your code will vary depending on the actual shape of your view.  UIBezierPath has many convenience methods, including bezierPathWithRoundedRect:cornerRadius: in case you’ve rounded the corners of your view.)
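For example, a rounded-corner variant might look like this (the corner radius is a made-up value; match it to your view’s actual corners):

// Match the shadow path to a view with rounded corners.
myView.layer.shadowPath = [[UIBezierPath
    bezierPathWithRoundedRect:myView.bounds
                 cornerRadius:8.0] CGPath];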

Now run it again and confirm that the yellow wash for offscreen-rendered content is gone.

The catch

You will need to update the layer’s shadowPath each time the bounds of your view change.  And if you’re animating a change to bounds, then you will also need to animate the change to the layer’s shadowPath to match.  This will need to be a CAAnimation because UIView cannot animate shadowPath (which is a property on CALayer).  Fortunately, it is straightforward to animate from one CGPath to another (from the old shadowPath to the new) via CAKeyframeAnimation.
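A minimal sketch of such an animation, assuming oldPath and newPath are CGPathRefs built from the old and new bounds:

// oldPath/newPath: CGPathRefs for the old and new bounds (assumed to exist).
CAKeyframeAnimation *shadowAnimation =
    [CAKeyframeAnimation animationWithKeyPath:@"shadowPath"];
shadowAnimation.values = @[(__bridge id)oldPath, (__bridge id)newPath];
shadowAnimation.duration = 0.3; // match the duration of the bounds animation
myView.layer.shadowPath = newPath; // set the model value to the end state
[myView.layer addAnimation:shadowAnimation forKey:@"shadowPath"];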

Efficient Edge Antialiasing

This trick is an oldie, but I think it’s still worth writing about.  The problem is that when a view’s edges are not straight (e.g. the view has been rotated), the edges are not antialiased by default and appear jagged.

Non-antialiased view on left, anti-aliased view on right

Detail of jagged non-antialiased edge

 

One Solution

Antialiasing is the process whereby a view’s edges are blended with the colors of the layer below it.  Antialiasing for view edges can be enabled systemwide by setting the UIViewEdgeAntialiasing flag in your app’s Info.plist, but as the documentation warns, this can have a negative impact on performance (because it requires Core Animation to sample pixels from the render buffer beneath your layer in order to calculate the blending).

An Alternate Solution

If the view in question is static content (or can be rendered temporarily as static content during animation), then there is a more efficient alternative: render the view as a UIImageView with a 1 point transparent boundary on all sides, and UIImageView will handle the blending for you (Core Animation will not have to sample the render buffer beneath your view’s layer).

Detail of smooth antialiased edge

 

How It Works

UIImageView has been highly optimized by Apple to work with the GPU, and one of the things it does is interpolate pixels within the image when the image is rotated or scaled.  Examine the UIImageView below: the outer edge is jagged, but the inner boundaries between the yellow and purple are properly interpolated by UIImageView (compare it to the UIView on the left in the first image near the top of this article).

UIImageView with jagged outer edges but smooth inner edges

Essentially, when you add the 1 point transparent margin around the outer edges of the UIImageView, the visible border becomes internal pixels, and UIImageView interpolates them with the neighboring transparent pixels just as it does for the rest of the image, thus eliminating the need to antialias the edges with the layer below it.  The resulting image (now with partially transparent edge pixels) can be rendered directly over the layer beneath it.

UIImageView with transparent edge: now all visible edges are smooth inner edges

 

How to render UIView as UIImage

You just create an image context, draw your view (or subset thereof) into the context, and get an image back.  This method lets you specify the exact frame (in the view’s coordinates) you want rendered.  Pass in view.bounds to render the entire view or pass a smaller rect to render just a subset (useful for splitting up views for animations).

+ (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame
{
    // Create a new context of the desired size to render the image
    UIGraphicsBeginImageContextWithOptions(frame.size, YES, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Translate it, to the desired position
    CGContextTranslateCTM(context, -frame.origin.x, -frame.origin.y);
    // Render the view as image
    [view.layer renderInContext:context];
    // Fetch the image
    UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
    // Cleanup
    UIGraphicsEndImageContext();
    return renderedImage;
}

How to add a transparent edge to UIImage

Again you just create an image context (this time slightly larger than your image), draw the original image into it (offset by a certain amount), then get the new larger image back.

+ (UIImage *)renderImageForAntialiasing:(UIImage *)image withInsets:(UIEdgeInsets)insets
{
    CGSize imageSizeWithBorder = CGSizeMake([image size].width + insets.left + insets.right, [image size].height + insets.top + insets.bottom);

    // Create a new context of the desired size to render the image
    UIGraphicsBeginImageContextWithOptions(imageSizeWithBorder, NO, 0);

    // The image starts off filled with clear pixels, so we don't need to explicitly fill them here	
    [image drawInRect:(CGRect){{insets.left, insets.top}, [image size]}];

    // Fetch the image   
    UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    return renderedImage;
}

Putting it all together

But of course, why create two image contexts and render twice when we can do it in a single step?

+ (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame transparentInsets:(UIEdgeInsets)insets
{
    CGSize imageSizeWithBorder = CGSizeMake(frame.size.width + insets.left + insets.right, frame.size.height + insets.top + insets.bottom);
    // Create a new context of the desired size to render the image
    UIGraphicsBeginImageContextWithOptions(imageSizeWithBorder, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Clip the context to the portion of the view we will draw
    CGContextClipToRect(context, (CGRect){{insets.left, insets.top}, frame.size});
    // Translate it, to the desired position
    CGContextTranslateCTM(context, -frame.origin.x + insets.left, -frame.origin.y + insets.top);

    // Render the view as image
    [view.layer renderInContext:context];

    // Fetch the image   
    UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();

    // Cleanup
    UIGraphicsEndImageContext();

    return renderedImage;
}

Some things to remember

  1. Be sure to expand the size of your image view’s bounds to account for the transparent edges.  For example, if the original image is 200 x 200, resize to 202 x 202.  Otherwise (depending on its content mode) the image might shrink to fit its new size in its old bounds.  (See the usage sketch after this list.)
  2. This solution doesn’t work particularly well if the image is being scaled down.  You need to have 1 pixel of transparent edge at the scaled size, so if you are scaling by 0.25 you would need 4 points of transparent margin at the full image size.  But even then the results are often unsatisfactory.  Rasterization fixes it, but requires an additional expensive off-screen rendering pass.
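Putting point 1 into practice, a hypothetical usage sketch (MYViewRenderer is a made-up class hosting the combined method above; someView is the view being replaced by its snapshot):

UIEdgeInsets insets = UIEdgeInsetsMake(1, 1, 1, 1);
UIImage *snapshot = [MYViewRenderer renderImageFromView:someView
                                               withRect:someView.bounds
                                      transparentInsets:insets];
UIImageView *imageView = [[UIImageView alloc] initWithImage:snapshot];
// Grow the frame by the transparent margin so the image isn't scaled down
// (a 200 x 200 view becomes a 202 x 202 image view).
imageView.frame = CGRectInset(someView.frame, -insets.left, -insets.top);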

Sample code

I created a simple sample project to demonstrate all this.  It has a regular UIView, a UIImageView copy with transparent edges, and a play/pause button to slowly rotate both views.  It’s on GitHub.

Note: the detail images were taken from the excellent xScope app by the Iconfactory.