[Capstone] React Native Vision Camera

귤티 · December 7, 2023


Use the Camera View

If you have permission to use the camera and microphone, you can simply get a camera device with the useCameraDevice(...) hook.

function App() {
  const device = useCameraDevice('back')

  if (device == null) return <NoCameraDeviceError />
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      isActive={true}
    />
  )
}

Camera Devices

Camera Devices are the physical (or virtual) devices that can be used to record videos or capture photos.

physical: A physical Camera Device is a camera lens on your phone.
Different physical camera devices have different specifications, such as different capture formats, resolutions, zoom levels, and more. Some phones have multiple physical Camera Devices.

virtual: A virtual camera device is a combination of one or more physical camera devices, and provides features such as virtual-device-switchover while zooming, or combined photo delivery from all physical cameras to produce higher-quality images.

Select the default Camera

If you simply want to use the default CameraDevice, you can just use whatever is available:

const device = useCameraDevice('back')

And VisionCamera will automatically find the best matching CameraDevice for you.

Custom Device Selection

For advanced use-cases, you might want to select a different CameraDevice for your app.
A CameraDevice consists of the following specifications:
id: A unique ID used to identify this Camera Device
position: The position of this camera Device relative to the phone
back: The camera Device is located on the back of the phone
front: The camera Device is located on the front of the phone
external: The camera Device is an external device. These devices can be either:
USB Camera Devices (if they support the USB Video Class (UVC) Specification)
Continuity Camera Devices (e.g your iPhone's or Mac's Camera connected through WiFi/Continuity)
Bluetooth/WiFi Camera Devices (if they are supported in the platform-native Camera APIs)
physicalDevices: The physical Camera Devices (lenses) this camera Device consists of. This can either be one of these values ("physical" device) or any combination of these values ("virtual" device):
ultra-wide-angle-camera: The "fish-eye" camera for 0.5x zoom
wide-angle-camera: The "default" camera for 1x zoom
telephoto-camera: A zoomed-in camera for 3x zoom
sensorOrientation: The orientation of the Camera sensor/lens relative to the phone. Cameras are usually in landscape-left orientation, meaning they are rotated by 90°. This includes their resolutions, so a 4k format might be 3840x2160, not 2160x3840
minZoom: The minimum possible zoom factor for this Camera Device. If this is a multi-cam, this is the point where the device with the widest field of view is used (e.g. ultra-wide)
maxZoom: The maximum possible zoom factor for this camera device. If this is a multi-cam, this is the point where the device with the narrowest field of view is used (e.g. telephoto)
neutralZoom: A value between minZoom and maxZoom where the "default" Camera Device is used (e.g wide-angle). When using multi-cams, make sure to start off at this zoom level, so the user can optionally zoom out to the ultra-wide-angle Camera instead of already starting zoomed out
formats: The list of CameraDeviceFormats this camera device supports. A format specifies:
Video Resolution
Photo Resolution
FPS
Video Stabilization Mode
Pixel Format
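
For example, you can inspect these specifications on a selected device; a small sketch logging a few of the fields listed above:

const device = useCameraDevice('back')

if (device != null) {
  console.log(device.id, device.position)
  console.log(device.physicalDevices) // e.g. ['ultra-wide-angle-camera', 'wide-angle-camera']
  console.log(device.minZoom, device.neutralZoom, device.maxZoom)
  console.log(device.sensorOrientation) // e.g. 'landscape-left'
}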

Examples on an iPhone

Back Wide Angle Camera (['wide-angle-camera'])
Back Ultra-Wide Angle Camera (['ultra-wide-angle-camera'])
Back Telephoto Camera (['telephoto-camera'])
Back Dual Camera (Wide + Telephoto)
Back Dual-Wide Camera (Ultra-Wide + Wide)
Back Triple Camera (Ultra-Wide + Wide + Telephoto)
Back LiDAR Camera (Wide + LiDAR-Depth)
Front Wide Angle (['wide-angle-camera'])
Front True-Depth (Wide + Depth)

Selecting Multi-Cams

Multi-Cams are virtual Camera Devices that consist of more than one physical Camera Device. For example:
ultra-wide + wide + telephoto = "Triple-Camera"
ultra-wide + wide = "Dual-Wide-Camera"
wide + telephoto = "Dual-Camera"

Benefits of Multi-Cams:
Multi-Cams can smoothly switch between the physical Camera Devices (lenses) while zooming.
Multi-Cams can capture Frames from all physical Camera Devices at the same time and fuse them together to create higher-quality Photos.

Downsides of Multi-Cams:
The Camera takes longer to initialize and uses more resources

To use the "Triple-Camera" in your app, you can just search for a device that contains all three physical Camera Devices:

const device = useCameraDevice('back', {
  physicalDevices: [
    'ultra-wide-angle-camera',
    'wide-angle-camera',
    'telephoto-camera'
  ]
})

This will try to find a CameraDevice that consists of all three physical Camera Devices, or the next best match (e.g. "Dual-Camera", or just a single wide-angle-camera) if not found. With the "Triple-Camera", we can now zoom out to a wider field of view.

If you want to do the filtering/sorting fully yourself, you can also just get all devices, then implement your own filter:

const devices = useCameraDevices()
const device = useMemo(() => findBestDevice(devices), [devices])
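
Here, findBestDevice is your own filter function; a hypothetical sketch that prefers the back-facing device with the most physical lenses:

import type { CameraDevice } from 'react-native-vision-camera'

// hypothetical example filter – adapt to your own requirements
function findBestDevice(devices: CameraDevice[]): CameraDevice | undefined {
  return devices
    .filter((d) => d.position === 'back')
    .sort((a, b) => b.physicalDevices.length - a.physicalDevices.length)[0]
}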

Selecting external Cameras

VisionCamera supports using external Camera Devices, such as:
USB Camera Devices (if they support the USB Video Class(UVC) Specification)
Continuity Camera Devices (e.g your iPhone's or Mac's Camera connected through WiFi/Continuity)
Bluetooth/WiFi Camera Devices (if they are supported in the platform-native Camera APIs)

Since external Camera Devices can be plugged in/out at any point, you need to make sure to listen for changes in the Camera Devices list when using external Cameras:

The hooks (useCameraDevice(...) and useCameraDevices()) already automatically listen for Camera Device changes!

const usbCamera = useCameraDevice('external')
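
Since the external device can be unplugged at any time, the hook may return undefined; handle that case in your UI (a minimal sketch, reusing the pattern from the first example):

if (usbCamera == null) return <NoCameraDeviceError />
return <Camera style={StyleSheet.absoluteFill} device={usbCamera} isActive={true} />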

Lifecycle

The isActive prop

The Camera's isActive property can be used to pause the session (isActive={false}) while still keeping the session "warm". This is more desirable than completely unmounting the camera, since resuming the session (isActive={true}) is much faster than re-mounting the camera view.

For example, you might want to pause the Camera when the user navigates to another page or minimizes the app, since otherwise the camera continues to run in the background without the user seeing it, causing significant battery drain. Also, on iOS a green dot indicates to the user that the camera is still active, possibly raising privacy concerns.

For example, to pause the Camera when the user minimizes the app (useAppState()) or navigates to a new screen (useIsFocused()):

function App() {
  const isFocused = useIsFocused()
  const appState = useAppState()
  const isActive = isFocused && appState === "active"

  return <Camera {...props} isActive={isActive} />
}

Interruptions

VisionCamera gracefully handles Camera interruptions such as incoming calls, phone overheating, a different app opening the Camera, etc., and will automatically resume the camera once it becomes available again.

Camera Formats

What are camera formats?

Each camera device provides a number of formats that have different specifications. There are formats specifically designed for high-resolution photo capture (but lower FPS), or formats designed for slow-motion video capture with frame rates of up to 240 FPS (but lower resolution).

What if I don't want to choose a format?

If you don't want to specify a Camera Format, you don't have to. The Camera automatically chooses the best matching format for the current camera device. This is why the Camera's format property is optional.

Choosing custom formats

To understand a bit more about camera formats, you first need to understand a few "general camera basics":
Each camera device is built differently, e.g. front-facing Cameras often don't have resolutions as high as the cameras on the back.
Formats are designed for specific use-cases, here are some examples for formats on a Camera Device:
4k Photos, 4k Videos, 30 FPS (high quality)
4k Photos, 1080p Videos, 60 FPS (high FPS)
4k Photos, 1080p Videos, 240 FPS (ultra high FPS/slow motion)
720p Photos, 720p Videos, 30 FPS (smaller buffers/e.g. faster face detection)
Each app has different requirements, so the format filtering is up to you.

To get all available formats, simply use the CameraDevice's formats property. These are a CameraFormat's props:

photoHeight/photoWidth: The resolution that will be used for taking photos. Choose a format with your desired resolution.
videoHeight/videoWidth: The resolution that will be used for recording videos. Choose a format with your desired resolution.
minFps/maxFps: A range of possible values for the fps property. For example, if your format has minFps: 1 and maxFps: 60, you can either use fps={30}, fps={60} or any other value in between for recording videos.
videoStabilizationModes: All supported Video Stabilization Modes, digital and optical. If this specific format contains your desired VideoStabilizationMode, you can pass it to your Camera via the videoStabilizationMode property.
pixelFormats: All supported Pixel Formats. If this specific format contains your desired PixelFormat, you can pass it to your Camera via the pixelFormat property.
supportsVideoHdr: Whether this specific format supports true 10-bit HDR for video capture. If this is true, you can enable videoHdr on your Camera.
supportsPhotoHdr: Whether this specific format supports HDR for photo capture. It will use multiple captures to fuse over-exposed and under-exposed Images together to form one HDR photo. If this is true, you can enable photoHdr on your Camera.
supportsDepthCapture: Whether this specific format supports depth data capture. For devices like the TrueDepth/LiDAR cameras, this will always be true.
...and more. See the CameraDeviceFormat type for all supported properties.

You can either find a matching format manually by looping through your CameraDevice's formats property, or by using the helper functions from VisionCamera:

const device = ...
const format = useCameraFormat(device, [
  { videoResolution: { width: 3840, height: 2160 } },
  { fps: 60 }
])

The filter is ordered by priority (descending), so if there is no format that supports both 4k and 60 FPS, the function will prefer 4k@30FPS formats over 1080p@60FPS formats, because 4k is a more important requirement than 60 FPS.
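
If you prefer to loop through the formats yourself instead, a minimal sketch that picks the first format supporting 60 FPS:

const device = ...
const format = device.formats.find((f) => f.minFps <= 60 && f.maxFps >= 60)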

If you want to record slow-motion videos, you want a format with a really high FPS setting, for example:

const device = ...
const format = useCameraFormat(device, [
  { fps: 240 }
])

If there is no format that has exactly 240 FPS, the closest thing to it will be used.
You can also use the 'max' flag to just use the maximum available resolution:

const device = ...
const format = useCameraFormat(device, [
  { videoResolution: 'max' },
  { photoResolution: 'max' }
])

Templates

For common use-cases, VisionCamera also exposes pre-defined Format templates:

const device = ...
const format = useCameraFormat(device, Templates.Snapchat)

Camera Props

The Camera View provides a few props that depend on the specified format. For example, you can only set the fps prop to a value that is supported by the current format. So if you have a format that supports 240 FPS, you can set the fps to 240:

function App() {
  // ...
  const format = ...
  const fps = format.maxFps >= 240 ? 240 : format.maxFps

  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      format={format}
      fps={fps}
    />
  )
}

Other props that depend on the format:
fps: Specifies the frame rate to use
videoHdr: Enables HDR video capture and preview
photoHdr: Enables HDR photo capture
lowLightBoost: Enables a night-mode/low-light-boost for photo or video capture and preview
videoStabilizationMode: Specifies the video stabilization mode to use for the video pipeline.
pixelFormat: Specifies the pixel format to use for the video pipeline

Taking Photos

Camera Functions

The Camera provides certain functions which are available through a ref object:

function App() {
  const camera = useRef<Camera>(null)
  // ...

  return (
    <Camera
      ref={camera}
      {...cameraProps}
    />
  )
}

To use these functions, you need to wait until the onInitialized event has been fired.
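
For example, you could track readiness in state and only call these functions once the Camera has initialized; a minimal sketch:

function App() {
  const camera = useRef<Camera>(null)
  const [isInitialized, setIsInitialized] = useState(false)

  // only call functions like takePhoto() once isInitialized is true
  return (
    <Camera
      ref={camera}
      {...cameraProps}
      onInitialized={() => setIsInitialized(true)}
    />
  )
}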

Taking Photos

To take a photo you first have to enable photo capture:

<Camera
  {...props}
  photo={true}
/>

Then, simply use the Camera's takePhoto(...) function:

const photo = await camera.current.takePhoto()

You can customize capture options such as automatic red-eye reduction, automatic image stabilization, enabling the flash, prioritizing speed over quality, disabling the shutter sound and more using the TakePhotoOptions parameter.

This function returns a PhotoFile which is stored in a temporary directory and can either be displayed using Image or FastImage, uploaded to a backend, or saved to the Camera Roll using react-native-cameraroll.
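
For example, the temporary PhotoFile can be rendered directly from its path with React Native's Image component (a minimal sketch, where photo is the result of takePhoto()):

<Image
  source={{ uri: `file://${photo.path}` }}
  style={StyleSheet.absoluteFill}
/>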

Flash

The takePhoto(...) function can be configured to enable the flash automatically (when the scene is dark), always, or never; the setting only applies to this specific capture request:

const photo = await camera.current.takePhoto({
  flash: 'on' // 'auto' | 'off'
})

Note that flash is only available on camera devices where hasFlash is true; for example most front cameras don't have a flash.
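
To guard against devices without a flash, you can check hasFlash first; a small sketch, assuming device comes from useCameraDevice(...):

const photo = await camera.current.takePhoto({
  flash: device.hasFlash ? 'auto' : 'off'
})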

Fast Capture

The takePhoto(...) function can be configured for faster capture at the cost of lower quality:

const photo = await camera.current.takePhoto({
  qualityPrioritization: 'speed',
  flash: 'off',
  enableShutterSound: false
})

Saving the Photo to the Camera Roll

Since the Photo is stored as a temporary file, you need to save it to the Camera Roll to permanently store it. You can use react-native-cameraroll for this:

const file = await camera.current.takePhoto()
await CameraRoll.save(`file://${file.path}`, {
  type: 'photo',
})

Getting the Photo's data

To get the Photo's pixel data, you can use fetch(...) to read the local file as a Blob:

const file = await camera.current.takePhoto()
const result = await fetch(`file://${file.path}`)
const data = await result.blob();
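
The Blob can then be uploaded to a backend; a minimal sketch, where the endpoint is a hypothetical placeholder for your own server:

// 'https://example.com/upload' is a placeholder – replace with your own backend
await fetch('https://example.com/upload', {
  method: 'POST',
  headers: { 'Content-Type': 'image/jpeg' },
  body: data,
})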

Recording Videos

Camera Functions

The Camera provides certain functions which are available through a ref object:

function App() {
  const camera = useRef<Camera>(null)
  // ...

  return (
    <Camera
      ref={camera}
      {...cameraProps}
    />
  )
}

To use these functions, you need to wait until the onInitialized event has been fired.

Recording Videos

To start a video recording you first have to enable video capture:

<Camera
  {...props}
  video={true}
  audio={true} // <-- optional
/>

Then, simply use the Camera's startRecording(...) function:

camera.current.startRecording({
  onRecordingFinished: (video) => console.log(video),
  onRecordingError: (error) => console.error(error)
})

You can customize capture options such as video codec, video bit-rate, file type, enabling the flash and more using the RecordVideoOptions parameter.

For any error that occurred while recording the video, the onRecordingError callback will be invoked with a CaptureError and the recording is therefore cancelled.

To stop the video recording, you can call stopRecording(...):

await camera.current.stopRecording()

Once a recording has been stopped, the onRecordingFinished callback passed to the startRecording(...) function will be invoked with a VideoFile which you can then display in a Video component, upload to a backend, or save to the Camera Roll using react-native-cameraroll.

Pause/Resume

To pause/resume the recordings, you can use pauseRecording() and resumeRecording():

await camera.current.pauseRecording()
...
await camera.current.resumeRecording()
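
A minimal toggle sketch, assuming you track the paused state in your own component:

const [isPaused, setIsPaused] = useState(false)

const onTogglePause = useCallback(async () => {
  if (isPaused) await camera.current?.resumeRecording()
  else await camera.current?.pauseRecording()
  setIsPaused((paused) => !paused)
}, [isPaused])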

Flash

The startRecording(...) function can be configured to enable the flash while recording, which natively just enables the torch under the hood:

camera.current.startRecording({
  flash: 'on',
  ...
})

Note that flash is only available on camera devices where hasTorch is true; for example most front cameras don't have a torch.

Video Codec

By default, videos are recorded in the H.264 video codec which is a widely adopted video codec.

VisionCamera also supports H.265 (HEVC), which is much more efficient in encoding performance and can produce files up to 50% smaller. If your backend can handle H.265, configure the video recorder to encode in H.265:

camera.current.startRecording({
  ...props,
  videoCodec: 'h265'
})

Video Bit Rate

Videos are recorded with a target bit-rate, which the encoder aims to match as closely as possible. A lower bit-rate means lower quality (and smaller file size), a higher bit-rate means higher quality (and larger file size) since more bits can be assigned to moving pixels.
To simply record videos with higher quality, use a videoBitRate of 'high', which effectively increases the bit-rate by 20%:

camera.current.startRecording({
  ...props,
  videoBitRate: 'high'
})

To use a lower bit-rate for lower quality and lower file-size, use a videoBitRate of 'low', which effectively decreases the bit-rate by 20%:

camera.current.startRecording({
  ...props,
  videoBitRate: 'low'
})

Custom Bit Rate

If you want to use a custom bit-rate, you first need to understand how bit-rate is calculated.

The bit-rate is a product of multiple factors such as resolution, FPS, pixel format (HDR or non-HDR), and video codec. As a good starting point, these are the recommended base bit-rates for their respective resolutions:
480p: 2 Mbps
720p: 5 Mbps
1080p: 10 Mbps
4k: 30 Mbps
8k: 100 Mbps

These bit-rates assume a frame rate of 30 FPS, a non-HDR pixel format, and the H.264 video codec.

To calculate your target bit-rate, you can use this formula:

let bitRate = baseBitRate
bitRate = bitRate / 30 * fps // FPS
if (videoHdr === true) bitRate *= 1.2 // 10-Bit Video HDR
if (codec === 'h265') bitRate *= 0.8 // H.265
bitRate *= yourCustomFactor // e.g. 0.5x for half the bit-rate
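
For example, a 1080p recording at 60 FPS with 10-bit video HDR and the H.265 codec works out to roughly 19.2 Mbps:

let bitRate = 10             // 1080p base bit-rate (Mbps)
bitRate = bitRate / 30 * 60  // 20 Mbps at 60 FPS
bitRate *= 1.2               // 24 Mbps with 10-bit video HDR
bitRate *= 0.8               // 19.2 Mbps with H.265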

And then pass it to the startRecording(...) function (in Mbps):

camera.current.startRecording({
  ...props,
  videoBitRate: bitRate // Mbps
})

Saving the Video to the Camera Roll

Since the Video is stored as a temporary file, you need to save it to the Camera Roll to permanently store it. You can use react-native-cameraroll for this:

camera.current.startRecording({
  ...props,
  onRecordingFinished: async (video) => {
    const path = video.path
    await CameraRoll.save(`file://${path}`, {
      type: 'video',
    })
  },
})

Camera Errors

Why?

Since the Camera library is quite big, there is a lot that can "go wrong". VisionCamera provides thoroughly typed errors to help you quickly identify the cause and fix the problem.

switch (error.code) {
  case "device/configuration-error":
    // prompt user
    break
  case "device/microphone-unavailable":
    // ask for permission
    break
  case "capture/recording-in-progress":
    // stop recording
    break
  default:
    console.error(error)
    break
}

Troubleshooting

See Troubleshooting if you're having weird issues.

The Error types

The CameraError type is a base class for all other errors and provides the following properties:
code: A typed code in the form of {domain}/{code} that can be used to quickly identify and group errors
message: A non-localized message text that provides more information and context about the error and possibly problematic values.
cause?: An ErrorWithCause instance that provides information about the cause of the error. (Optional)
cause.message: The message of the error that caused the camera error.
cause.code?: The native error's error-code. (iOS only)
cause.domain?: The native error's domain. (iOS only)
cause.details?: More dictionary-style information about the cause. (iOS only)
cause.stacktrace?: A native Java stacktrace for the cause. (Android only)
cause.cause?: The cause that caused this cause. (Recursive) (Optional)

Runtime Errors

The CameraRuntimeError represents any kind of error that occurred while mounting the Camera view, or an error that occurred during runtime.

The Camera UI component provides an onError function that will be invoked every time an unexpected runtime error occurs.

function App() {
  const onError = useCallback((error: CameraRuntimeError) => {
    console.error(error)
  }, [])

  return <Camera onError={onError} {...cameraProps} />
}

Capture Errors

The CameraCaptureError represents any kind of error that occurred only while taking a photo or recording a video.

function App() {
  const camera = useRef<Camera>(null)

  // called when the user presses a "capture" button
  const onPress = useCallback(async () => {
    try {
      const photo = await camera.current.takePhoto()
    } catch (e) {
      if (e instanceof CameraCaptureError) {
        switch (e.code) {
          case "capture/file-io-error":
            console.error("Failed to write photo to disk!")
            break
          default:
            console.error(e)
            break
        }
      }
    }
  }, [camera])

  return <Camera ref={camera} {...cameraProps} />
}

Mocking

Mocking VisionCamera

These steps allow you to mock VisionCamera and use it for development or testing. Based on the Detox Mock Guide.

Configure the Metro bundler

In order to override React Native modules, allow the Metro bundler to use the RN_SRC_EXT environment variable to extend resolver.sourceExts, prioritizing any given source extension over the default ones.

Add to your Metro Config:

const { getDefaultConfig } = require("metro-config")
const { resolver: defaultResolver } = getDefaultConfig.getDefaultValues()

module.exports = {
  // ...
  resolver: {
    ...defaultResolver,
    sourceExts: [
      ...(process.env.RN_SRC_EXT ? process.env.RN_SRC_EXT.split(',') : []),
      ...defaultResolver.sourceExts,
    ],
  },
}

Create proxy for original and mocked modules

  1. Create a new folder vision-camera anywhere in your project.

  2. Inside that folder, create vision-camera.js and vision-camera.e2e.js.

  3. Inside vision-camera.js, export the original react-native-vision-camera modules you need to mock, and inside vision-camera.e2e.js export the mocked modules.

    In this example, the Camera module and the sortDevices helper are mocked. Define your mocks following the original definitions.

    // vision-camera.js

import { Camera, sortDevices } from 'react-native-vision-camera'

export const VisionCamera = Camera

// vision-camera.e2e.js

import React from 'react'
import RNFS, { writeFile } from 'react-native-fs'

console.log('[DETOX] Using mocked react-native-vision-camera')

export class VisionCamera extends React.PureComponent {
  static getAvailableCameraDevices() {
    return [
      {
        position: 'back',
      },
    ]
  }

  static async getCameraPermissionStatus() {
    return 'granted'
  }

  static async requestCameraPermission() {
    return 'granted'
  }

  async takePhoto() {
    const writePath = `${RNFS.DocumentDirectoryPath}/simulated_camera_photo.png`

    const imageDataBase64 = 'some_large_base_64_encoded_simulated_camera_photo'
    await writeFile(writePath, imageDataBase64, 'base64')

    return { path: writePath }
  }

  render() {
    return null
  }
}


These mocked modules allow us to get granted camera permissions, report one available back camera, and take a fake photo, while the component renders nothing when instantiated.

Use proxy module

Now that we have exported our native modules and our mocked modules from the same folder, we must reference the proxy module instead.

// before
import { Camera } from 'react-native-vision-camera'

// now
import { VisionCamera } from '/your_path_to_created_folder/vision-camera/vision-camera'



Trigger

Start the Metro bundler with the flag for using .e2e.js files. Whenever Metro runs with the RN_SRC_EXT environment variable set, it will override the default files with the ones set in RN_SRC_EXT.

RN_SRC_EXT=e2e.js react-native start
RN_SRC_EXT=e2e.js xcodebuild
RN_SRC_EXT=e2e.js ./gradlew assembleRelease


On your simulator, with debug mode enabled, you should see "[DETOX] Using mocked react-native-vision-camera".