Core Image Programming Guide
INTRODUCTION
Core Image is an image processing and analysis technology designed to provide near real-time processing for
still and video images. It operates on image data types from the Core Graphics, Core Video, and Image I/O
frameworks, using either a GPU or CPU rendering path. Core Image hides the details of low-level graphics
processing by providing an easy-to-use application programming interface (API). You don’t need to know the
details of OpenGL or OpenGL ES to leverage the power of the GPU, nor do you need to know anything about
Grand Central Dispatch (GCD) to get the benefit of multicore processing. Core Image handles the details for
you.
Processing Images
Core Image has three classes that support image processing on iOS and OS X:
● CIFilter is a mutable object that represents an effect. A filter object has at least one input parameter
and produces an output image.
● CIImage is an immutable object that represents an image. You can synthesize image data or provide it
from a file or the output of another CIFilter object.
● CIContext is an object through which Core Image draws the results produced by a filter. A Core Image
context can be based on the CPU or the GPU.
The remainder of this chapter provides all the details you need to use Core Image filters and the CIFilter,
CIImage, and CIContext classes on iOS and OS X.
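The three classes above work together as a simple pipeline: configure a CIFilter with a CIImage, then render its output through a CIContext. A minimal Swift sketch (the filter name and parameter values here are illustrative, not taken from the guide; a solid-color image stands in for a real photo so the example is self-contained):

```swift
import CoreImage

// A solid-color image stands in for a photo (illustrative only).
let inputImage = CIImage(color: CIColor(red: 1, green: 0, blue: 0))
    .cropped(to: CGRect(x: 0, y: 0, width: 32, height: 32))

// CIFilter: a mutable effect object, configured with key-value pairs.
let filter = CIFilter(name: "CISepiaTone")!
filter.setValue(inputImage, forKey: kCIInputImageKey)
filter.setValue(0.8, forKey: kCIInputIntensityKey)

// CIImage: the filter's output is a recipe for an image, not yet rendered.
let outputImage = filter.outputImage!

// CIContext: performs the actual rendering, on either the GPU or the CPU.
let context = CIContext()
let rendered = context.createCGImage(outputImage, from: outputImage.extent)
```

Note that no pixels are processed until the context renders the output image; chaining filters before rendering lets Core Image combine the work efficiently.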
The Built-in Filters
Core Image comes with dozens of built-in filters ready to support image processing in your app. Core Image
Filter Reference lists these filters together with their characteristics and their iOS and OS X availability, and
shows a sample image produced by each. Because the list of built-in filters can change, Core Image provides
methods that let you query the system for the available filters (see “Querying the System for Filters” (page 38)).
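For example, you can ask Core Image for the filters in a given category, or pass nil to get every filter available on the current system (a brief Swift sketch):

```swift
import CoreImage

// Ask Core Image for the filter names in one category;
// kCICategoryBlur is one of the built-in category constants.
let blurFilters = CIFilter.filterNames(inCategory: kCICategoryBlur)

// Passing nil returns every filter available on this system.
let allFilters = CIFilter.filterNames(inCategory: nil)

print("Blur filters: \(blurFilters)")
print("Total filters available: \(allFilters.count)")
```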
A filter category specifies the type of effect—blur, distortion, generator, and so forth—or its intended use—still
images, video, nonsquare pixels, and so on. A filter can be a member of more than one category. A filter also
has a display name, which is the name to show to users, and a filter name, which is the name you must use
to access the filter programmatically.
Most filters have one or more input parameters that let you control how processing is done. Each input
parameter has an attribute class that specifies its data type, such as NSNumber. An input parameter can
optionally have other attributes, such as its default value, the allowable minimum and maximum values, the
display name for the parameter, and any other attributes that are described in CIFilter Class Reference.
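These per-parameter attributes are available at runtime through a filter's attributes dictionary. A Swift sketch of inspecting them (the printed values depend on the system's filter registry):

```swift
import CoreImage

// Inspect a filter's attributes dictionary to discover its
// display name and per-parameter metadata.
let filter = CIFilter(name: "CIColorMonochrome")!
let attributes = filter.attributes

// The display name, i.e. the name to show to users.
let displayName = attributes[kCIAttributeFilterDisplayName] as? String

// Each input parameter has its own attribute dictionary,
// including its data type (attribute class) and, optionally,
// a default value and allowable minimum and maximum values.
if let intensity = attributes["inputIntensity"] as? [String: Any] {
    print("Type:", intensity[kCIAttributeClass] ?? "unknown")
    print("Default:", intensity[kCIAttributeDefault] ?? "none")
    print("Min:", intensity[kCIAttributeSliderMin] ?? "none")
}
```

Querying attributes this way, rather than hard-coding parameter ranges, keeps an app correct as the filter list evolves across OS releases.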
For example, the CIColorMonochrome filter has three input parameters—the image to process, a monochrome
color, and the color intensity. You supply the image and have the option to set a color and its intensity. Most
filters, including the CIColorMonochrome filter, have default values for each nonimage input parameter. Core
Image uses the default values to process your image if you choose not to supply your own values for the input
parameters.
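In Swift, relying on those defaults looks like this (a sketch; the solid-color input image is a stand-in for a real photo, and the commented-out overrides use illustrative values):

```swift
import CoreImage

// A solid-color test image stands in for a real photo here.
let image = CIImage(color: CIColor(red: 0.2, green: 0.6, blue: 0.9))
    .cropped(to: CGRect(x: 0, y: 0, width: 64, height: 64))

// Only the input image is required; the monochrome color and
// intensity fall back to the filter's default values.
let mono = CIFilter(name: "CIColorMonochrome")!
mono.setValue(image, forKey: kCIInputImageKey)

// Optionally override the defaults (illustrative values):
// mono.setValue(CIColor(red: 0.7, green: 0.7, blue: 0.7), forKey: kCIInputColorKey)
// mono.setValue(1.0, forKey: kCIInputIntensityKey)

let result = mono.outputImage
```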