This trick is an oldie, but I think it’s still worth writing about. The problem: when a view’s edges are not straight (e.g. because the view has been rotated), they are not antialiased by default and appear jagged.
Non-antialiased view on left, anti-aliased view on right
Detail of jagged non-antialiased edge
One Solution
Antialiasing is the process whereby a view’s edges are blended with the colors of the layer below it. Antialiasing for view edges can be enabled systemwide by setting the UIViewEdgeAntialiasing flag in your app’s info.plist, but as the documentation warns, this can have a negative impact on performance (because it requires Core Animation to sample pixels from the render buffer beneath your layer in order to calculate the blending).
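For reference, enabling the systemwide flag is just an Info.plist entry (the key name comes from Apple’s documentation; the surrounding dictionary is omitted here):

```xml
<key>UIViewEdgeAntialiasing</key>
<true/>
```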
An Alternate Solution
If the view in question is static content (or can be rendered temporarily as static content during animation), there is a more efficient alternative: render the view into a UIImage with a 1 point transparent boundary on all sides and display it in a UIImageView. The UIImageView then handles the edge blending for you, and Core Animation does not have to sample the render buffer beneath your view’s layer.
Detail of smooth antialiased edge
How It Works
UIImageView has been highly optimized by Apple to work with the GPU, and one of the things it does is interpolate pixels within the image when the image is rotated or scaled. Examine the UIImageView below: the outer edge is jagged, but the inner boundaries between the yellow and purple are properly interpolated (compare it to the UIView on the left in the first image near the top of this article).
UIImageView with jagged outer edges but smooth inner edges
Essentially, adding the 1 point transparent margin around the outer edges turns the visible border into internal pixels, and UIImageView interpolates them with the neighboring transparent pixels just as it does for the rest of the image, eliminating the need to antialias the edges against the layer below. The resulting image (now with partially transparent edge pixels) can be rendered directly over the layer beneath it.
UIImageView with transparent edge: now all visible edges are smooth inner edges
How to render UIView as UIImage
You just create an image context, draw your view (or subset thereof) into the context, and get an image back. This method lets you specify the exact frame (in the view’s coordinates) you want rendered. Pass in view.bounds to render the entire view or pass a smaller rect to render just a subset (useful for splitting up views for animations).
+ (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame
{
// Create a new context of the desired size to render the image
UIGraphicsBeginImageContextWithOptions(frame.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Translate the context so the frame's origin maps to the context's origin
CGContextTranslateCTM(context, -frame.origin.x, -frame.origin.y);
// Render the view's layer into the context
[view.layer renderInContext:context];
// Fetch the image
UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
// Cleanup
UIGraphicsEndImageContext();
return renderedImage;
}
How to add a transparent edge to UIImage
Again, you just create an image context (this time slightly larger than your image), draw the original image into it (offset by the inset amounts), then get the new larger image back.
+ (UIImage *)renderImageForAntialiasing:(UIImage *)image withInsets:(UIEdgeInsets)insets
{
CGSize imageSizeWithBorder = CGSizeMake([image size].width + insets.left + insets.right, [image size].height + insets.top + insets.bottom);
// Create a new context of the desired size to render the image
UIGraphicsBeginImageContextWithOptions(imageSizeWithBorder, NO, 0);
// The image starts off filled with clear pixels, so we don't need to explicitly fill them here
[image drawInRect:(CGRect){{insets.left, insets.top}, [image size]}];
// Fetch the image
UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return renderedImage;
}
Putting it all together
But of course, why create two image contexts and render twice when we can do it in a single step?
+ (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame transparentInsets:(UIEdgeInsets)insets
{
CGSize imageSizeWithBorder = CGSizeMake(frame.size.width + insets.left + insets.right, frame.size.height + insets.top + insets.bottom);
// Create a new context of the desired size to render the image
UIGraphicsBeginImageContextWithOptions(imageSizeWithBorder, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Clip the context to the portion of the view we will draw
CGContextClipToRect(context, (CGRect){{insets.left, insets.top}, frame.size});
// Translate the context so the view is rendered inside the transparent insets
CGContextTranslateCTM(context, -frame.origin.x + insets.left, -frame.origin.y + insets.top);
// Render the view's layer into the context
[view.layer renderInContext:context];
// Fetch the image
UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
// Cleanup
UIGraphicsEndImageContext();
return renderedImage;
}
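Using the combined method might look like the following sketch. The class name ViewSnapshotter and the view myView are my own placeholders; the method itself is the one defined above:

```objc
// Replace the live view with an antialiasing-friendly image copy before rotating.
UIEdgeInsets insets = UIEdgeInsetsMake(1, 1, 1, 1);
UIImage *image = [ViewSnapshotter renderImageFromView:myView
                                             withRect:myView.bounds
                                    transparentInsets:insets];
// initWithImage: sizes the image view to the padded image (2 points larger in each dimension)
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
imageView.center = myView.center;
imageView.transform = CGAffineTransformMakeRotation(M_PI_4);
[myView.superview addSubview:imageView];
myView.hidden = YES;
```

Because initWithImage: sizes the view to the padded image, the transparent border is preserved rather than squeezed into the original bounds.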
Some things to remember
- Be sure to expand your image view’s bounds to account for the transparent edges. For example, if the original image is 200 x 200 and you add 1 point insets on each side, resize to 202 x 202. Otherwise (depending on its content mode) the image might shrink to fit its new size in its old bounds.
- This solution doesn’t work particularly well if the image is being scaled down. You need 1 pixel of transparent edge at the scaled size, so if you are scaling by 0.25 you need 4 points of transparent margin at the full image size. Even then the results are often unsatisfactory. Rasterization fixes it, but requires an additional expensive off-screen rendering pass.
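For the scaled-down case in the last bullet, the required margin can be derived from the scale factor; a minimal sketch (the helper name is mine):

```objc
// One pixel of transparency is needed at the *scaled* size,
// so the margin at full size grows as the scale shrinks.
static UIEdgeInsets transparentInsetsForScale(CGFloat scale)
{
    CGFloat margin = ceil(1.0 / scale); // e.g. scale 0.25 -> 4 point margin
    return UIEdgeInsetsMake(margin, margin, margin, margin);
}
```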
Sample code
I created a simple sample project to demonstrate all this. It has a regular UIView, a UIImageView copy with transparent edges, and a play/pause button to slowly rotate both views. It’s on GitHub.
Note: the detail images were taken from the excellent xScope app by the Iconfactory.