Requests
The app framework issues requests for captured results to the camera subsystem. One request corresponds to one set of results. A request encapsulates all configuration information about the capturing and processing of those results, including resolution and pixel format; manual sensor, lens, and flash control; 3A operating modes; RAW-to-YUV processing control; and statistics generation. This allows much more control over how results are produced and processed. Multiple requests can be in flight at once, submitting requests is non-blocking, and requests are always processed in the order they are received.
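As an illustration of this model (not the actual framework or HAL classes), a capture request can be thought of as a settings bundle plus the output streams it targets, queued in FIFO order. The CaptureRequest and RequestQueue types below are hypothetical and only sketch the concept:

```cpp
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// Hypothetical, simplified model of a capture request: all configuration for
// one capture plus the output streams that should receive its image data.
struct CaptureRequest {
    uint32_t frameNumber;             // monotonically increasing per request
    int64_t exposureTimeNs;           // manual sensor control (ignored when AE is active)
    int32_t sensitivityIso;           // manual sensor control (ignored when AE is active)
    std::vector<int> outputStreamIds; // which configured streams to fill
};

// Hypothetical non-blocking FIFO: the framework enqueues requests without
// waiting, and the HAL consumes them strictly in submission order.
class RequestQueue {
  public:
    void submit(CaptureRequest request) {   // non-blocking for the caller
        std::lock_guard<std::mutex> lock(mMutex);
        mPending.push_back(std::move(request));
    }
    bool popNext(CaptureRequest* out) {     // HAL side, strictly in order
        std::lock_guard<std::mutex> lock(mMutex);
        if (mPending.empty()) return false;
        *out = std::move(mPending.front());
        mPending.pop_front();
        return true;
    }
  private:
    std::mutex mMutex;
    std::deque<CaptureRequest> mPending;
};
```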
HAL and camera subsystem
The camera subsystem includes the implementations for components in the camera pipeline such as the 3A algorithm and processing controls. The camera HAL provides interfaces for you to implement your versions of these components. To maintain cross-platform compatibility between multiple device manufacturers and Image Signal Processor (ISP, or camera sensor) vendors, the camera pipeline model is virtual and does not directly correspond to any real ISP. However, it is similar enough to real processing pipelines so that you can map it to your hardware efficiently. In addition, it is abstract enough to allow for multiple different algorithms and orders of operation without compromising either quality, efficiency, or cross-device compatibility.
The camera pipeline also supports triggers that the app framework can initiate, such as turning on auto-focus. It also sends notifications back to the app framework for events such as an auto-focus lock or errors.
Note that some image processing blocks shown in the diagram above are not well-defined in the initial release. The camera pipeline makes the following assumptions:
- RAW Bayer output undergoes no processing inside the ISP.
- Statistics are generated based on the raw sensor data.
- The various processing blocks that convert raw sensor data to YUV are in an arbitrary order.
- While multiple scale and crop units are shown, all scaler units share the output region controls (digital zoom). However, each unit may have a different output resolution and pixel format.
Summary of API use
This is a brief summary of the steps for using the Android camera API. See the
Startup and expected operation sequence section for a detailed breakdown of
these steps, including API calls.
- Listen for and enumerate camera devices.
- Open device and connect listeners.
- Configure outputs for target use case (such as still capture, recording, etc.).
- Create request(s) for target use case.
- Capture/repeat requests and bursts.
- Receive result metadata and image data.
- When switching use cases, return to step 3.
HAL operation summary
- Asynchronous requests for captures come from the framework.
- The HAL device must process requests in order and, for each request, produce output result metadata and one or more output image buffers.
- Requests, results, and the streams referenced by subsequent requests are handled first-in, first-out.
- Timestamps must be identical for all outputs from a given request, so that the framework can match them together if needed.
- All capture configuration and state (except for the 3A routines) is encapsulated in the requests and results.
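A minimal HAL-side sketch of these rules, using hypothetical Request, Result, and callback types rather than the real HIDL definitions: each request yields one result metadata blob and one or more filled buffers, all stamped with the same start-of-exposure timestamp and returned in submission order.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical simplified types; the real interfaces are the HIDL
// ICameraDeviceSession / ICameraDeviceCallback definitions.
struct OutputBuffer { int streamId; int64_t timestampNs; /* image data handle */ };
struct ResultMetadata { uint32_t frameNumber; int64_t sensorTimestampNs; /* 3A state, etc. */ };
struct Request { uint32_t frameNumber; std::vector<int> outputStreamIds; };

struct FrameworkCallback {   // stand-in for ICameraDeviceCallback
    virtual void notifyShutter(uint32_t frameNumber, int64_t timestampNs) = 0;
    virtual void processCaptureResult(const ResultMetadata& meta,
                                      const std::vector<OutputBuffer>& buffers) = 0;
    virtual ~FrameworkCallback() = default;
};

// Process one request: every output carries the same start-of-exposure
// timestamp so the framework can match metadata and image buffers.
void processOneRequest(const Request& request, int64_t startOfExposureNs,
                       FrameworkCallback& cb) {
    cb.notifyShutter(request.frameNumber, startOfExposureNs);

    std::vector<OutputBuffer> buffers;
    for (int streamId : request.outputStreamIds) {
        buffers.push_back({streamId, startOfExposureNs});
    }
    ResultMetadata meta{request.frameNumber, startOfExposureNs};
    cb.processCaptureResult(meta, buffers);   // results stay in FIFO order
}
```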
Startup and expected operation sequence
This section contains a detailed explanation of the steps expected when using the camera API. See platform/hardware/interfaces/camera/ for the HIDL interface definitions.
Enumerate, open camera devices, and create an active session
- After initialization, the framework starts listening for any present camera providers that implement the ICameraProvider interface. If one or more such providers are present, the framework tries to establish a connection.
- The framework enumerates the camera devices via ICameraProvider::getCameraIdList().
- The framework instantiates a new ICameraDevice by calling the respective ICameraProvider::getCameraDeviceInterface_VX_X().
- The framework calls ICameraDevice::open() to create a new active capture session ICameraDeviceSession.
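The sequence above, sketched with hypothetical simplified interfaces whose method names mirror ICameraProvider and ICameraDevice (the real HIDL methods return their values through generated callback parameters, which is omitted here):

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical, simplified stand-ins for the HIDL interfaces; the method
// names mirror ICameraProvider / ICameraDevice but the signatures are
// illustrative only.
struct CameraDeviceSession { /* ICameraDeviceSession stand-in */ };

struct CameraDevice {
    // ICameraDevice::open(): creates the active capture session.
    virtual std::shared_ptr<CameraDeviceSession> open() = 0;
    virtual ~CameraDevice() = default;
};

struct CameraProvider {
    // ICameraProvider::getCameraIdList()
    virtual std::vector<std::string> getCameraIdList() = 0;
    // ICameraProvider::getCameraDeviceInterface_VX_X()
    virtual std::shared_ptr<CameraDevice> getCameraDeviceInterface(
            const std::string& cameraId) = 0;
    virtual ~CameraProvider() = default;
};

// Framework-side flow: enumerate, instantiate a device, open a session.
std::shared_ptr<CameraDeviceSession> openFirstCamera(CameraProvider& provider) {
    std::vector<std::string> ids = provider.getCameraIdList();
    if (ids.empty()) return nullptr;
    std::shared_ptr<CameraDevice> device = provider.getCameraDeviceInterface(ids[0]);
    if (!device) return nullptr;
    return device->open();
}
```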
Use an active camera session
- The framework calls ICameraDeviceSession::configureStreams() with a list of input/output streams to the HAL device.
- The framework requests default settings for some use cases with calls to ICameraDeviceSession::constructDefaultRequestSettings(). This may occur at any time after the ICameraDeviceSession is created by ICameraDevice::open().
- The framework constructs and sends the first capture request to the HAL with settings based on one of the sets of default settings, and with at least one output stream that has been registered earlier by the framework. This is sent to the HAL with ICameraDeviceSession::processCaptureRequest(). The HAL must block the return of this call until it is ready for the next request to be sent.
- The framework continues to submit requests and calls ICameraDeviceSession::constructDefaultRequestSettings() to get default settings buffers for other use cases as necessary.
- When the capture of a request begins (the sensor starts exposing for the capture), the HAL calls ICameraDeviceCallback::notify() with the SHUTTER message, including the frame number and the timestamp for the start of exposure. This notify callback does not have to happen before the first processCaptureResult() call for a request, but no results are delivered to an app for a capture until after notify() for that capture is called.
- After some pipeline delay, the HAL begins to return completed captures to the framework with ICameraDeviceCallback::processCaptureResult(). These are returned in the same order as the requests were submitted. Multiple requests can be in flight at once, depending on the pipeline depth of the camera HAL device.
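In the same spirit as the sketch above, the steady-state flow of an active session might look like the following (hypothetical simplified stand-ins whose method names mirror ICameraDeviceSession; the real HIDL signatures pass HIDL structs and callback objects, not these illustrative types):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical simplified stand-ins; method names mirror ICameraDeviceSession.
struct StreamConfig { int streamId; int width; int height; int pixelFormat; };
struct RequestSettings { /* opaque default settings for a use-case template */ };
struct CaptureRequest { uint32_t frameNumber; RequestSettings settings;
                        std::vector<int> outputStreamIds; };

struct CameraDeviceSession {
    // ICameraDeviceSession::configureStreams()
    virtual bool configureStreams(const std::vector<StreamConfig>& streams) = 0;
    // ICameraDeviceSession::constructDefaultRequestSettings()
    virtual RequestSettings constructDefaultRequestSettings(int useCaseTemplate) = 0;
    // ICameraDeviceSession::processCaptureRequest(); blocks only until the HAL
    // is ready to accept the next request, not until the capture completes.
    virtual void processCaptureRequest(const CaptureRequest& request) = 0;
    virtual ~CameraDeviceSession() = default;
};

// Framework-side steady state: configure streams once, then keep submitting
// requests; results arrive asynchronously via the ICameraDeviceCallback path.
void runPreview(CameraDeviceSession& session, const StreamConfig& previewStream) {
    session.configureStreams({previewStream});
    RequestSettings previewDefaults =
            session.constructDefaultRequestSettings(/*useCaseTemplate=*/1);

    for (uint32_t frame = 0; frame < 30; ++frame) {
        CaptureRequest request{frame, previewDefaults, {previewStream.streamId}};
        session.processCaptureRequest(request);
    }
}
```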
After some time, one of the following will occur:
- The framework may stop submitting new requests, wait for the existing captures to complete (all buffers filled, all results returned), and then call ICameraDeviceSession::configureStreams() again. This resets the camera hardware and pipeline for a new set of input/output streams. Some streams may be reused from the previous configuration. The framework then continues from the first capture request to the HAL, if at least one registered output stream remains. (Otherwise, ICameraDeviceSession::configureStreams() is required first.)
- The framework may call ICameraDeviceSession::close() to end the camera session. This may be called at any time when no other calls from the framework are active, although the call may block until all in-flight captures have completed (all results returned, all buffers filled). After the close() call returns, no more calls to ICameraDeviceCallback are allowed from the HAL. Once the close() call is underway, the framework may not call any other HAL device functions.
- In case of an error or other asynchronous event, the HAL must call ICameraDeviceCallback::notify() with the appropriate error/event message. After returning from a fatal device-wide error notification, the HAL should act as if close() had been called on it. However, the HAL must either cancel or complete all outstanding captures before calling notify(), so that once notify() is called with a fatal error, the framework will not receive further callbacks from the device. Methods besides close() should return -ENODEV or NULL after the notify() method returns from a fatal error message.
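A sketch of the fatal-error rule in the last bullet, again with hypothetical simplified types: the HAL finishes or cancels everything in flight, sends the error notification, and then rejects further calls with -ENODEV.

```cpp
#include <atomic>
#include <cerrno>

// Hypothetical stand-ins for the framework callback and the HAL session state.
struct FrameworkCallback {
    virtual void notifyError(int errorCode) = 0;   // ICameraDeviceCallback::notify() stand-in
    virtual ~FrameworkCallback() = default;
};

class HalSession {
  public:
    explicit HalSession(FrameworkCallback* cb) : mCallback(cb) {}

    // Called when the HAL hits an unrecoverable device-wide failure.
    void onFatalDeviceError() {
        drainOrCancelInFlightCaptures();   // complete or cancel everything first
        mFatalError = true;                // no further callbacks after notify()
        mCallback->notifyError(/*ERROR_DEVICE=*/1);
    }

    // Every entry point other than close() must fail once a fatal error
    // has been reported.
    int processCaptureRequest(/* request */) {
        if (mFatalError) return -ENODEV;
        /* ...normal request handling... */
        return 0;
    }

  private:
    void drainOrCancelInFlightCaptures() { /* return all buffers and results */ }

    FrameworkCallback* mCallback;
    std::atomic<bool> mFatalError{false};
};
```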
Hardware levels
Camera devices can implement several hardware levels depending on their capabilities. For more information, see supported hardware level.
Interaction between the app capture request, 3A control, and the processing pipeline
Depending on the settings in the 3A control block, the camera pipeline ignores some of the parameters in the app's capture request and uses the values provided by the 3A control routines instead. For example, when auto-exposure is active, the exposure time, frame duration, and sensitivity parameters of the sensor are controlled by the platform 3A algorithm, and any app-specified values are ignored. The values chosen for the frame by the 3A routines must be reported in the output metadata. The following table describes the different modes of the 3A control block and the properties that are controlled by these modes. See the platform/system/media/camera/docs/docs.html file for definitions of these properties.
| Parameter | State | Properties controlled |
|---|---|---|
| android.control.aeMode | OFF | None |
| | ON | android.sensor.exposureTime, android.sensor.frameDuration, android.sensor.sensitivity, android.lens.aperture (if supported), android.lens.filterDensity (if supported) |
| | ON_AUTO_FLASH | Everything in ON, plus android.flash.firingPower, android.flash.firingTime, and android.flash.mode |
| | ON_ALWAYS_FLASH | Same as ON_AUTO_FLASH |
| | ON_AUTO_FLASH_RED_EYE | Same as ON_AUTO_FLASH |
| android.control.awbMode | OFF | None |
| | WHITE_BALANCE_* | android.colorCorrection.transform, plus platform-specific adjustments if android.colorCorrection.mode is FAST or HIGH_QUALITY |
| android.control.afMode | OFF | None |
| | FOCUS_MODE_* | android.lens.focusDistance |
| android.control.videoStabilization | OFF | None |
| | ON | Can adjust android.scaler.cropRegion to implement video stabilization |
| android.control.mode | OFF | AE, AWB, and AF are disabled |
| | AUTO | Individual AE, AWB, and AF settings are used |
| | SCENE_MODE_* | Can override all parameters listed above; individual 3A controls are disabled |
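As an illustration of the table's AE rows (using a hypothetical flattened settings struct rather than the real camera_metadata API): when android.control.aeMode is anything other than OFF, the HAL replaces the app-supplied sensor settings with the values chosen by the AE routine, and those chosen values must also be reported in the result metadata.

```cpp
#include <cstdint>

// Hypothetical flattened view of a few request/result fields; a real
// implementation reads and writes camera_metadata tags such as
// android.sensor.exposureTime and android.control.aeMode.
enum class AeMode { OFF, ON, ON_AUTO_FLASH, ON_ALWAYS_FLASH, ON_AUTO_FLASH_RED_EYE };

struct SensorSettings {
    int64_t exposureTimeNs;
    int64_t frameDurationNs;
    int32_t sensitivityIso;
};

struct AeOutput { SensorSettings chosen; };   // produced by the platform AE routine

// Returns the settings actually applied for this frame; these must also be
// written into the capture result so the app sees what the 3A routine chose.
SensorSettings resolveSensorSettings(AeMode aeMode,
                                     const SensorSettings& appRequested,
                                     const AeOutput& aeResult) {
    if (aeMode == AeMode::OFF) {
        return appRequested;   // manual control: app-supplied values are honored
    }
    return aeResult.chosen;    // AE active: app-supplied values are ignored
}
```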
The controls in the Image Processing block in Figure 2 all operate on a similar principle, and generally each block has three modes:
- OFF: This processing block is disabled. The demosaic, color correction, and tone curve adjustment blocks cannot be disabled.
- FAST: In this mode, the processing block may not slow down the output frame rate compared to OFF mode, but should otherwise produce the best-quality output it can given that restriction. Typically, this would be used for preview or video recording modes, or burst capture for still images. On some devices, this may be equivalent to OFF mode (no processing can be done without slowing down the frame rate), and on some devices, this may be equivalent to HIGH_QUALITY mode (best quality still does not slow down frame rate).
- HIGH_QUALITY: In this mode, the processing block should produce the best quality result possible, slowing down the output frame rate as needed. Typically, this would be used for high-quality still capture.

Some blocks include a manual control which can be optionally selected instead of FAST or HIGH_QUALITY. For example, the color correction block supports a color transform matrix, while the tone curve adjustment supports an arbitrary global tone mapping curve.
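A small sketch of how a device might resolve these modes for one processing block (hypothetical; whether FAST collapses to OFF or to HIGH_QUALITY is a per-device property, as described above):

```cpp
// Hypothetical per-block mode resolution. On a device whose best-quality
// processing already runs at full frame rate, FAST behaves like HIGH_QUALITY;
// on a device that cannot do any processing without slowing the frame rate,
// FAST behaves like OFF.
enum class BlockMode { OFF, FAST, HIGH_QUALITY };

BlockMode resolveBlockMode(BlockMode requested,
                           bool highQualityRunsAtFullFrameRate,
                           bool anyProcessingSlowsFrameRate) {
    if (requested != BlockMode::FAST) return requested;
    if (highQualityRunsAtFullFrameRate) return BlockMode::HIGH_QUALITY;
    if (anyProcessingSlowsFrameRate) return BlockMode::OFF;
    return BlockMode::FAST;   // otherwise: best processing that keeps the frame rate
}
```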
The maximum frame rate that can be supported by a camera subsystem is a function of many factors:
- Requested resolutions of output image streams
- Availability of binning/skipping modes on the imager
- The bandwidth of the imager interface
- The bandwidth of the various ISP processing blocks
Since these factors can vary greatly between different ISPs and sensors, the camera HAL interface tries to abstract the bandwidth restrictions into as simple a model as possible. The model presented has the following characteristics:
- The image sensor is always configured to output the smallest resolution possible given the app's requested output stream sizes. The smallest resolution is defined as being at least as large as the largest requested output stream size.
- Since any request may use any or all the currently configured output streams, the sensor and ISP must be configured to support scaling a single capture to all the streams at the same time.
- JPEG streams act like processed YUV streams for requests for which they are not included; in requests in which they are directly referenced, they act as JPEG streams.
- The JPEG processor can run concurrently with the rest of the camera pipeline but cannot process more than one capture at a time.
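For example, the first rule can be sketched as choosing the smallest sensor output mode that is at least as large, in each dimension, as every requested output stream (a hypothetical helper, assuming the sensor advertises a fixed list of output modes sorted from smallest to largest):

```cpp
#include <cstdint>
#include <vector>

struct Size { int32_t width; int32_t height; };

// Hypothetical helper: pick the smallest available sensor output mode that is
// at least as large as the largest requested output stream in each dimension.
// Returns {0, 0} if no advertised mode can cover the requested streams.
Size pickSensorOutput(const std::vector<Size>& requestedStreams,
                      const std::vector<Size>& sensorModesSortedAscending) {
    int32_t neededWidth = 0;
    int32_t neededHeight = 0;
    for (const Size& s : requestedStreams) {
        if (s.width > neededWidth) neededWidth = s.width;
        if (s.height > neededHeight) neededHeight = s.height;
    }
    for (const Size& mode : sensorModesSortedAscending) {
        if (mode.width >= neededWidth && mode.height >= neededHeight) {
            return mode;   // smallest mode that covers all requested streams
        }
    }
    return {0, 0};
}
```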