Exploring Android Camera API: Building Camera-enabled Apps

The ubiquitous presence of smartphones with high-quality cameras has transformed the way we capture and share moments. As an Android developer, diving into the world of camera-enabled apps can be a fascinating journey. The Android Camera API provides a robust set of features and controls to create powerful camera applications that take full advantage of the device’s capabilities. In this blog, we will explore the Android Camera API, its key components, and how to build camera-enabled apps with captivating functionalities.

1. Understanding the Android Camera API:

The Android Camera API is part of the android.hardware.camera2 package, introduced in Android Lollipop (API level 21). It provides a powerful and flexible framework to interact with the device’s camera hardware and create custom camera functionalities in your Android applications.

1.1. Key Components of the Camera API:

The Camera API consists of several important components, each serving a specific purpose in the camera workflow:

1.1.1. CameraManager:

The CameraManager class allows you to interact with the device’s cameras, query their availability, and open a connection to a specific camera for capturing photos or videos.

1.1.2. CameraDevice:

The CameraDevice represents an open connection to a specific camera, allowing you to set up capture sessions and configure various camera settings.

1.1.3. CameraCharacteristics:

The CameraCharacteristics class provides metadata about the camera, such as supported resolutions, available features, and lens characteristics. It helps you determine the capabilities of the camera and adjust your app’s behavior accordingly.

1.1.4. CameraCaptureSession:

The CameraCaptureSession is responsible for handling the capture requests and image processing. It manages the flow of data between the camera device and the output surfaces.

1.1.5. CaptureRequest:

The CaptureRequest class encapsulates the settings for a single capture request, including capture mode, exposure, focus, and more.
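To see how these pieces fit together, here is a brief sketch (Android framework code, assumed to run inside an Activity) that uses CameraManager and CameraCharacteristics to locate the rear-facing camera before opening it:

```java
// Sketch: selecting the rear-facing camera with CameraManager + CameraCharacteristics.
// The returned ID can then be passed to CameraManager.openCamera(), as shown later in this post.
private String findBackCameraId(CameraManager cameraManager) throws CameraAccessException {
    for (String cameraId : cameraManager.getCameraIdList()) {
        CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
        Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
        if (facing != null && facing == CameraMetadata.LENS_FACING_BACK) {
            return cameraId; // Found a back-facing camera
        }
    }
    return null; // No back-facing camera available on this device
}
```

This is more robust than assuming a fixed position in the camera ID list, since the ordering is device-dependent.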

2. Building a Simple Camera App:

Let’s start by building a simple camera app that captures photos using the Android Camera API. Ensure that you have set up your development environment and have the necessary permissions declared in the AndroidManifest.xml file.
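For reference, the required manifest entries look like this (a minimal sketch; the `uses-feature` element is optional and marks the app as requiring a camera so the Play Store can filter incompatible devices):

```xml
<!-- AndroidManifest.xml: declare the camera permission and (optionally) the camera feature -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" android:required="true" />
```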

2.1. Requesting Camera Permission:

Before accessing the camera, you need to request camera permission from the user. Add the following code in your activity to request permission:

private static final int REQUEST_CAMERA_PERMISSION = 200;

private void requestCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        // Permission not yet granted; ask the user
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA},
                REQUEST_CAMERA_PERMISSION);
    } else {
        openCamera(); // Proceed to open the camera if permission is granted
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                       @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == REQUEST_CAMERA_PERMISSION) {
        if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            openCamera(); // Permission granted; proceed
        } else {
            Toast.makeText(this, "Camera permission denied", Toast.LENGTH_SHORT).show();
        }
    }
}

2.2. Opening the Camera:

Now, let’s implement the openCamera() method to open a camera for capturing photos. Note that the first ID returned by getCameraIdList() is usually, but not guaranteed to be, the rear-facing camera:

private CameraDevice mCameraDevice;

private final CameraDevice.StateCallback mCameraStateCallback = new CameraDevice.StateCallback() {
    @Override
    public void onOpened(@NonNull CameraDevice camera) {
        mCameraDevice = camera;
        // Start the camera preview here
        startCameraPreview();
    }

    @Override
    public void onDisconnected(@NonNull CameraDevice camera) {
        camera.close();
        mCameraDevice = null;
    }

    @Override
    public void onError(@NonNull CameraDevice camera, int error) {
        camera.close();
        mCameraDevice = null;
    }
};

private void openCamera() {
    CameraManager cameraManager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    try {
        String cameraId = cameraManager.getCameraIdList()[0]; // Typically the rear-facing camera
        cameraManager.openCamera(cameraId, mCameraStateCallback, null);
    } catch (CameraAccessException | SecurityException e) {
        e.printStackTrace();
    }
}

2.3. Creating the Camera Preview:

To display the camera preview on the screen, you need to set up a SurfaceView or TextureView and use the CameraCaptureSession to handle the preview requests.

private SurfaceView mSurfaceView; // Assume you have already initialized this in the layout XML
private CameraCaptureSession mCameraCaptureSession;
private static final int PREVIEW_WIDTH = 1280;
private static final int PREVIEW_HEIGHT = 720;

private void startCameraPreview() {
    try {
        SurfaceHolder surfaceHolder = mSurfaceView.getHolder();
        surfaceHolder.setFixedSize(PREVIEW_WIDTH, PREVIEW_HEIGHT); // Set desired preview size
        Surface previewSurface = surfaceHolder.getSurface();

        final CaptureRequest.Builder previewRequestBuilder =
                mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        previewRequestBuilder.addTarget(previewSurface);

        mCameraDevice.createCaptureSession(Collections.singletonList(previewSurface),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession session) {
                        // The camera is ready; start the repeating preview request
                        mCameraCaptureSession = session;
                        try {
                            session.setRepeatingRequest(previewRequestBuilder.build(), null, null);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                        Toast.makeText(MainActivity.this, "Failed to start camera preview", Toast.LENGTH_SHORT).show();
                    }
                }, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}

3. Capturing Photos:

Now that we have the camera preview set up, let’s implement the photo capture functionality.

3.1. Taking a Photo:

To capture photos, you need to configure the camera with a still-capture request and set the target output for the captured image. The target here is the ImageReader’s surface (set up in Section 3.2), which must also be included in the list of output surfaces when the capture session is created.

private void takePhoto() {
    try {
        final CaptureRequest.Builder captureBuilder =
                mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        // Direct the captured image to the ImageReader's surface
        captureBuilder.addTarget(mImageReader.getSurface());

        CameraCaptureSession.CaptureCallback captureCallback =
                new CameraCaptureSession.CaptureCallback() {
                    @Override
                    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                                   @NonNull CaptureRequest request,
                                                   @NonNull TotalCaptureResult result) {
                        // Photo capture completed
                        // Implement your logic here, e.g., save the photo, display it to the user, etc.
                    }
                };

        mCameraCaptureSession.capture(captureBuilder.build(), captureCallback, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}

3.2. Setting up ImageReader:

To receive and process the captured image, you need to set up an ImageReader.

private ImageReader mImageReader;
private static final int IMAGE_WIDTH = 1280;
private static final int IMAGE_HEIGHT = 720;
private static final int MAX_IMAGES = 1;

private void setupImageReader() {
    mImageReader = ImageReader.newInstance(IMAGE_WIDTH, IMAGE_HEIGHT, ImageFormat.JPEG, MAX_IMAGES);
    mImageReader.setOnImageAvailableListener(
            new ImageReader.OnImageAvailableListener() {
                @Override
                public void onImageAvailable(ImageReader reader) {
                    // The image is available, process it here
                    Image image = reader.acquireLatestImage();
                    // Implement your logic to save or display the image
                    if (image != null) {
                        image.close(); // Always close the Image to release its buffer
                    }
                }
            }, null);
}

4. Real-time Camera Filters:

Adding real-time filters to the camera preview can significantly enhance the user experience. Let’s explore how to apply a simple color filter to the camera preview.

4.1. Applying a Color Filter:

To apply a color filter, we’ll use the RenderScript framework to perform efficient image processing operations. (Note that RenderScript is deprecated as of Android 12, with Google recommending Vulkan or the RenderScript Intrinsics Replacement Toolkit for new projects, but it remains useful for illustrating the approach.)

private RenderScript mRenderScript;
private Allocation mInputAllocation;
private Allocation mOutputAllocation;
private ScriptIntrinsicColorMatrix mColorMatrixScript;

private void setupColorFilter() {
    mRenderScript = RenderScript.create(this);
    mColorMatrixScript = ScriptIntrinsicColorMatrix.create(mRenderScript, Element.U8_4(mRenderScript));
    // Boost red and green, reduce blue, for a warm tint
    mColorMatrixScript.setColorMatrix(new Matrix4f(new float[]{
            1.5f, 0.0f, 0.0f, 0.0f,
            0.0f, 1.5f, 0.0f, 0.0f,
            0.0f, 0.0f, 0.5f, 0.0f,
            0.0f, 0.0f, 0.0f, 1.0f,
    }));

    // One U8_4 element per pixel (each element is already 4 bytes: RGBA)
    mInputAllocation = Allocation.createSized(mRenderScript, Element.U8_4(mRenderScript), IMAGE_WIDTH * IMAGE_HEIGHT);
    mOutputAllocation = Allocation.createTyped(mRenderScript, mInputAllocation.getType());
}

4.2. Applying the Filter to the Camera Preview:

To apply the filter, we’ll intercept the captured frames from the camera preview and process them using the RenderScript.

private void processCameraFrame(Image image) {
    // Copy the RGBA frame data out of the Image buffer
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);

    // Run the color-matrix kernel over the frame
    mInputAllocation.copyFrom(bytes);
    mColorMatrixScript.forEach(mInputAllocation, mOutputAllocation);

    // Read the filtered pixels back
    byte[] filtered = new byte[bytes.length];
    mOutputAllocation.copyTo(filtered);

    // Display the filtered preview here
    image.close(); // Release the frame buffer
}


5. Conclusion:

In this blog, we explored the Android Camera API and learned how to build camera-enabled apps with features like capturing photos and applying real-time filters. The Android Camera API offers endless possibilities for creativity, enabling you to build captivating camera applications that redefine the way users capture and share their experiences. By leveraging the powerful features of the Camera API, you can deliver exceptional camera functionalities in your Android apps, delighting your users and making your app stand out from the crowd.

Now it’s your turn to experiment and explore further possibilities with the Android Camera API. Happy coding!
