Illustration by Virginia Poltrack

What’s new in CameraX

How to add advanced camera controls to your app.

Xi Zhang
Android Developers
Feb 24, 2020

Co-authored with Caren Chang, Developer Programs Engineer

This article is based on a presentation from the Android Dev Summit 2019 with updates to reflect the current state of the CameraX API. Watch the full presentation here:

The Camera2 API is powerful, but it can be tricky to get the most out of it, especially given the variety of camera capabilities, such as HDR or night mode, offered by different devices. To address this, at last year's Google I/O we announced CameraX, a new Jetpack library designed to take the frustration out of adding camera features to apps.

As part of an effort to help developers more easily integrate camera features into their applications, the CameraX team focused on these key aspects:

  • New capabilities and APIs, to help you effortlessly enable more camera features in your apps. These now include support for tap-to-focus, zoom control, and device rotation information, and they make it easier to handle different configurations and per-lens capabilities: for example, you can query whether a camera lens has flash capability, and a lot more.
  • Widening the range of devices that support extension functions, so apps can make use of camera features such as night mode or HDR on more devices. At the time of writing, we have compatibility with phones from Samsung, LG, OPPO, Xiaomi, and Motorola running Android 10 and above.
  • Testing, focusing in particular on API consistency and stability, using a lab with 52 different device models ranging from low-end to high-end and representing over 200 million active devices.

As part of this work, the CameraX team is working closely with the Lens Go team to understand how the library performs in the wild. Lens Go is an app that lets users point their camera at something, like an airport sign, to analyze the image and get feedback in real time, such as a translation of the sign. This cooperation proved to be a good way to test how the CameraX library works, particularly on low-end devices, a key device segment for Lens Go. And, with millions of users using Lens Go every month across hundreds of devices, seeing how the CameraX library performs in Lens Go has helped a lot in delivering a more stable library.

One of the biggest benefits that the Lens Go team saw from integrating CameraX was a smaller APK size, because CameraX has been heavily optimized for performance and size. They were also able to ship features faster without having to maintain their own camera code.

Using CameraX

CameraX supports three use cases for the most common camera scenarios:

  • Preview, enabling you to include a viewfinder showing a live camera feed in your app.
  • Image Analysis, enabling you to access camera frame data to implement features such as object detection and augmented reality.
  • Image Capture, enabling you to take a picture and save it to disk.

For each use case, there are three setup steps: configuration, binding, and interaction.

To illustrate how this works, let’s take the example of image capture: taking a picture in your app.

The first step is to create an image capture use case. You can specify parameters such as the resolution of the picture, and you don't have to worry about whether the requested resolution is available on your user's device: if the device doesn't support it, CameraX simply falls back to the nearest supported resolution. This means that configuration always succeeds.
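For instance, a minimal configuration sketch, assuming the androidx.camera Builder API (the 1280x720 target resolution is just an example value), might look like this:

```kotlin
import android.util.Size
import androidx.camera.core.ImageCapture

// Configure the image capture use case. If the device can't provide
// 1280x720, CameraX falls back to the nearest supported resolution.
val imageCapture = ImageCapture.Builder()
    .setTargetResolution(Size(1280, 720))
    .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
    .build()
```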

The second step is binding. There are different lifecycles to consider, such as the lifecycle of the activity and the lifecycles of the camera and capture session. By binding the use case to a LifecycleOwner, CameraX manages all of these lifecycles so you don't have to manage the state machines yourself. For example, the camera is opened when it's needed and released once it's done.
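Here's a sketch of the binding step, assuming this code runs in an Activity (which is a LifecycleOwner) and that imageCapture is the use case configured above:

```kotlin
import androidx.camera.core.CameraSelector
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat

val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
cameraProviderFuture.addListener({
    val cameraProvider = cameraProviderFuture.get()
    // Bind the use case to this Activity's lifecycle. CameraX opens the
    // camera when the lifecycle starts and releases it when it stops.
    cameraProvider.bindToLifecycle(
        this,                               // LifecycleOwner
        CameraSelector.DEFAULT_BACK_CAMERA,
        imageCapture
    )
}, ContextCompat.getMainExecutor(this))
```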

The final step is interaction. When you call takePicture, your app snaps a picture.
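A minimal sketch of the interaction step, assuming imageCapture is already bound and using a placeholder output file name:

```kotlin
import java.io.File
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.core.content.ContextCompat

val photoFile = File(filesDir, "photo.jpg")  // example output location
val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()

imageCapture.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(this),
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(output: ImageCapture.OutputFileResults) {
            // The picture was saved to photoFile.
        }

        override fun onError(exception: ImageCaptureException) {
            // Handle the failure, e.g. show a message or log it.
        }
    }
)
```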

So, with just a few lines of code, you have an image capture pipeline.

Advanced features

To give you some idea of the advanced features delivered by CameraX, let's take a look at the camera control and camera info capabilities. These are high-level APIs that enable you to control the camera directly, independent of the use cases. This means that, if you have a preview and an image capture use case bound together, updating the camera state, such as the zoom or the flashlight, updates it for all of the bound use cases.
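As a sketch, both objects come from the Camera instance that bindToLifecycle returns; the snippets below assume cameraControl and cameraInfo were obtained roughly like this (preview and imageCapture are placeholders for your bound use cases):

```kotlin
// bindToLifecycle returns a Camera that exposes control and info objects
// shared by every use case bound in that call.
val camera = cameraProvider.bindToLifecycle(
    this, CameraSelector.DEFAULT_BACK_CAMERA, preview, imageCapture
)
val cameraControl = camera.cameraControl
val cameraInfo = camera.cameraInfo

// Example query: does the selected lens have a flash unit?
val hasFlashUnit = cameraInfo.hasFlashUnit()
```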

Implementing tap to focus

CameraX supports autofocus, but in certain cases, you may want to give your user the ability to manually control the focus target.

If you do this with the Camera2 API you need to figure out the transformation between the UI (viewfinder) coordinates and the camera sensor (image) coordinates and specify the size of the focus area.

This is how you do it with CameraX:

First, you transform the coordinates by creating a DisplayOrientedMeteringPointFactory that takes in a Display, a CameraSelector, and the SurfaceView width/height. Then use it to convert the metering point in UI coordinates to normalized sensor coordinates.

Next, you create an action. To focus and meter at the same point, use FocusMeteringAction, passing the normalized metering point.

Finally, you give this action to cameraControl and CameraX handles the rest of the work.
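Putting the three steps together, here's a rough sketch; viewFinder (the view showing the preview), cameraSelector, and cameraControl are assumed to come from the setup and binding code above:

```kotlin
import android.view.MotionEvent
import androidx.camera.core.DisplayOrientedMeteringPointFactory
import androidx.camera.core.FocusMeteringAction

viewFinder.setOnTouchListener { view, event ->
    if (event.action == MotionEvent.ACTION_UP) {
        // Step 1: map the tap from view coordinates to normalized sensor coordinates.
        val factory = DisplayOrientedMeteringPointFactory(
            view.display, cameraSelector, view.width.toFloat(), view.height.toFloat()
        )
        val point = factory.createPoint(event.x, event.y)

        // Step 2: build an action that focuses and meters at that point.
        val action = FocusMeteringAction.Builder(point).build()

        // Step 3: hand the action to cameraControl; CameraX handles the rest.
        cameraControl.startFocusAndMetering(action)
    }
    true
}
```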

Implementing pinch to zoom

Implementing pinch to zoom with Camera2 required you to figure out the transformation and crop region.

This is how you do it with CameraX:

To implement pinch to zoom we need two values: the base value and the delta value. The base value is the current zoom ratio, and the delta value is how much it has changed as a result of the user's pinch.

To get the delta value, create a ScaleGestureDetector. This Android class converts a touch event into a scale factor; the scale factor here is the delta value.

Then, the base value is obtained from cameraInfo, the API for getting the status of camera features such as zoom ratio, flash availability, and sensor rotation degrees.

With these two values, multiply them together and call setZoomRatio on cameraControl. CameraX will figure out the crop region and send the request to the camera, and that's it: you have implemented pinch to zoom.
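A rough sketch of the whole flow, assuming a CameraX version where cameraInfo exposes its zoom state as LiveData, and where viewFinder is the preview view from the earlier snippets:

```kotlin
import android.view.ScaleGestureDetector

val scaleListener = object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
    override fun onScale(detector: ScaleGestureDetector): Boolean {
        // Base value: the current zoom ratio reported by cameraInfo.
        val currentZoomRatio = cameraInfo.zoomState.value?.zoomRatio ?: 1f
        // Delta value: the scale factor produced by the pinch gesture.
        val delta = detector.scaleFactor
        cameraControl.setZoomRatio(currentZoomRatio * delta)
        return true
    }
}
val scaleGestureDetector = ScaleGestureDetector(this, scaleListener)

viewFinder.setOnTouchListener { _, event ->
    scaleGestureDetector.onTouchEvent(event)
    true
}
```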

Implementing a zoom slider

To implement a zoom slider, we need to take a different approach from pinch to zoom. The reason is that the zoom ratio doesn't map evenly onto what the user sees: if you have a camera that can zoom from a ratio of 1 to 10, zooming from 1 to 2 shrinks the field of view by 50%, yet zooming from 9 to 10, although it's the same distance on the slider, shrinks the field of view by only 10%.

Implementing a zoom slider with Android logo

This isn’t the best user experience, which is why CameraX includes the setLinearZoom API. This API takes a slider value between 0 and 1 and does the necessary transformation so that the zoom change feels linear to the user.

Implementing a zoom slider can, therefore, be done with one line of code:
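The essential line is the setLinearZoom call; here's a small sketch, assuming a hypothetical zoomSlider SeekBar with a 0 to 100 range:

```kotlin
import android.widget.SeekBar

zoomSlider.setOnSeekBarChangeListener(object : SeekBar.OnSeekBarChangeListener {
    override fun onProgressChanged(seekBar: SeekBar?, progress: Int, fromUser: Boolean) {
        // setLinearZoom expects a value between 0 and 1 and maps it so the
        // zoom feels linear to the user across the whole range.
        cameraControl.setLinearZoom(progress / 100f)
    }

    override fun onStartTrackingTouch(seekBar: SeekBar?) {}
    override fun onStopTrackingTouch(seekBar: SeekBar?) {}
})
```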

Learn more

Hopefully, this has given you some insight into how easily the CameraX API lets you integrate camera features into your app. To find out more, check out the CameraX documentation. There, in addition to the full API documentation, you will find an example app and a hands-on codelab. You might also want to read some of our earlier blog posts, such as Core Principles Behind the CameraX Jetpack Library.

If you have any questions or feedback, post them on the CameraX Google group.

What do you think?

Do you have thoughts on CameraX? Let us know in the comments below or tweet using #AndroidStudio and we’ll reply from @AndroidDev, where we regularly share news and tips on how to be successful on Android.
