import {
ScreenCapturePickerView,
RTCPeerConnection,
RTCIceCandidate,
RTCSessionDescription,
RTCView,
MediaStream,
MediaStreamTrack,
mediaDevices,
registerGlobals
} from 'react-native-webrtc';
You'll only really need to use this function if you are mixing project development with libraries that use browser-based WebRTC functions. It also applies if you are making your project compatible with react-native-web.
registerGlobals();
Here is a list of everything that will be linked up.
You can also find a shim for react-native-web over here.
navigator.mediaDevices.getUserMedia()
navigator.mediaDevices.getDisplayMedia()
navigator.mediaDevices.enumerateDevices()
window.RTCPeerConnection
window.RTCIceCandidate
window.RTCSessionDescription
window.MediaStream
window.MediaStreamTrack
Some devices might not have more than 1 camera. The following will allow you to know how many cameras the device has.
You can use enumerateDevices to list other media device information too.
let cameraCount = 0;

try {
    const devices = await mediaDevices.enumerateDevices();

    devices.forEach( device => {
        if ( device.kind !== 'videoinput' ) { return; }

        cameraCount = cameraCount + 1;
    } );
} catch( err ) {
    // Handle Error
}
By default we're sending both audio and video.
This will allow us to toggle the video stream during a call.
let mediaConstraints = {
    audio: true,
    video: {
        frameRate: 30,
        facingMode: 'user'
    }
};
If you only want a voice call then you can flip isVoiceOnly over to true.
You can then cycle and enable or disable the video tracks on demand during a call.
let localMediaStream;
let isVoiceOnly = false;
try {
    const mediaStream = await mediaDevices.getUserMedia( mediaConstraints );

    if ( isVoiceOnly ) {
        let videoTrack = mediaStream.getVideoTracks()[ 0 ];
        videoTrack.enabled = false;
    }

    localMediaStream = mediaStream;
} catch( err ) {
    // Handle Error
}
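The on-demand toggling mentioned above can be sketched as a small helper (a sketch, assuming the localMediaStream from the snippet above; the function name is our own):

```javascript
// Toggle every video track on a stream. A disabled track stays attached
// to the peer connection but stops producing frames, so no renegotiation
// is needed to pause and resume video mid-call.
function toggleVideo( mediaStream ) {
    mediaStream.getVideoTracks().forEach( track => {
        track.enabled = !track.enabled;
    } );
}
```

During a call you'd invoke toggleVideo( localMediaStream ) from, say, a button handler.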
This will allow capturing the device screen, and it also requests permission on execution. Android 10+ requires that a foreground service is running, otherwise capturing won't work; follow this solution.
try {
    const mediaStream = await mediaDevices.getDisplayMedia();

    localMediaStream = mediaStream;
} catch( err ) {
    // Handle Error
}
Cycling all of the tracks and stopping them is more than enough to clean up after a call has finished.
localMediaStream.getTracks().forEach(
    track => track.stop()
);
localMediaStream = null;
We're only specifying a STUN server, but you should also look at using a TURN server. If you want to improve call reliability then check this guide.
let peerConstraints = {
    iceServers: [
        {
            urls: 'stun:stun.l.google.com:19302'
        }
    ]
};
Here, we're creating a peer connection required to get a call started.
You can also hook up events by directly overwriting functions instead of using event listeners.
let peerConnection = new RTCPeerConnection( peerConstraints );
peerConnection.addEventListener( 'connectionstatechange', event => {} );
peerConnection.addEventListener( 'icecandidate', event => {} );
peerConnection.addEventListener( 'icecandidateerror', event => {} );
peerConnection.addEventListener( 'iceconnectionstatechange', event => {} );
peerConnection.addEventListener( 'icegatheringstatechange', event => {} );
peerConnection.addEventListener( 'negotiationneeded', event => {} );
peerConnection.addEventListener( 'signalingstatechange', event => {} );
peerConnection.addEventListener( 'track', event => {} );
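The icecandidate listener is typically where you forward locally gathered candidates to the other participant. A minimal sketch, where send stands in for whatever signaling transport you use (both names are assumptions, not part of the library):

```javascript
// Forward locally gathered ICE candidates to the remote peer.
// `send` is a placeholder for your own signaling transport.
function forwardIceCandidate( event, send ) {
    // A null candidate means gathering has finished for this session.
    if ( !event.candidate ) { return; }

    send( { type: 'candidate', candidate: event.candidate } );
}
```

Hook it up with peerConnection.addEventListener( 'icecandidate', event => forwardIceCandidate( event, send ) );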
When ending a call you should always make sure to dispose of everything ready for another call.
The following should dispose of everything related to the peer connection.
peerConnection.close();
peerConnection = null;
After using one of the media functions above, you can then add the media stream to the peer connection.
The negotiation needed event will be triggered on the peer connection afterwards.
localMediaStream.getTracks().forEach(
    track => peerConnection.addTrack( track, localMediaStream )
);
Usually the call initialiser would create the data channel, but it can be done on both sides. The negotiation needed event will be triggered on the peer connection afterwards.
let datachannel = peerConnection.createDataChannel( 'my_channel' );
datachannel.addEventListener( 'open', event => {} );
datachannel.addEventListener( 'close', event => {} );
datachannel.addEventListener( 'message', message => {} );
The following event is for the second client, not the client which created the data channel, unless you want both sides to create separate data channels.
peerConnection.addEventListener( 'datachannel', event => {
    let datachannel = event.channel;

    // Now you've got the datachannel.
    // You can hook up and use the same events as above ^
} );
You can send a range of different data types over data channels; here we're going to send a simple string.
Bear in mind that there are limits on sending large amounts of data in a single message, which isn't usually advised.
datachannel.send( 'Hey There!' );
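If you do need to push a larger payload, one common workaround is splitting it into smaller messages. A sketch of that idea; the 16 KiB chunk size is a widely interoperable choice, not a limit imposed by this library:

```javascript
// Send a large string over a data channel in fixed-size chunks.
// 16 KiB messages are safe across most WebRTC implementations.
const CHUNK_SIZE = 16 * 1024;

function sendInChunks( channel, text ) {
    for ( let i = 0; i < text.length; i += CHUNK_SIZE ) {
        channel.send( text.slice( i, i + CHUNK_SIZE ) );
    }
}
```

The receiving side would then reassemble the chunks from its message events.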
When the peer connection is destroyed, data channels should also be destroyed automatically. But as good practice, you can always close them yourself.
datachannel.close();
datachannel = null;
As mentioned above, by default we're going for the approach of offering both video and voice. That will allow you to enable and disable video streams on demand while a call is active.
let sessionConstraints = {
    mandatory: {
        OfferToReceiveAudio: true,
        OfferToReceiveVideo: true,
        VoiceActivityDetection: true
    }
};
Executed by the call initialiser after media streams have been added to the peer connection. ICE Candidate creation and gathering will start as soon as an offer has been created.
try {
    const offerDescription = await peerConnection.createOffer( sessionConstraints );
    await peerConnection.setLocalDescription( offerDescription );

    // Send the offerDescription to the other participant.
} catch( err ) {
    // Handle Errors
}
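On the initiator's side, once the answerDescription arrives back over signaling, it still has to be applied as the remote description. A sketch (the function and payload names are our own; setRemoteDescription also accepts a plain { type, sdp } object):

```javascript
// Caller side: apply the answer the callee sent back over signaling.
async function handleRemoteAnswer( peerConnection, answer ) {
    // `answer` is the { type: 'answer', sdp: ... } payload from signaling.
    await peerConnection.setRemoteDescription( answer );
}
```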
All parties must ensure the proper handling of ICE Candidates. Otherwise, the offer-answer handshake stage might encounter some unexpected behavior.
try {
    // `offer` is the offerDescription received via your signaling channel.
    const offerDescription = new RTCSessionDescription( offer );
    await peerConnection.setRemoteDescription( offerDescription );

    const answerDescription = await peerConnection.createAnswer();
    await peerConnection.setLocalDescription( answerDescription );

    // Send the answerDescription back as a response to the offerDescription.
} catch( err ) {
    // Handle Errors
}
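Candidates received over signaling can arrive before setRemoteDescription has run, in which case addIceCandidate will fail. A common pattern, sketched here, buffers them until the remote description exists (the function and variable names are assumptions; addIceCandidate also accepts a plain candidate object):

```javascript
// Buffer remote candidates that arrive before the remote description is set,
// then flush them once the offer/answer exchange has completed.
let remoteCandidates = [];

function handleRemoteCandidate( peerConnection, candidate ) {
    if ( peerConnection.remoteDescription == null ) {
        remoteCandidates.push( candidate );
        return;
    }

    return peerConnection.addIceCandidate( candidate );
}

function processCandidates( peerConnection ) {
    remoteCandidates.forEach( candidate => peerConnection.addIceCandidate( candidate ) );
    remoteCandidates = [];
}
```

Call handleRemoteCandidate whenever a candidate arrives over signaling, and processCandidates right after setRemoteDescription succeeds.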
Naturally, we assume you'll be using the front camera by default when starting a call.
So we set isFrontCam as true and let the value flip on execution.
let isFrontCam = true;

try {
    // cameraCount comes from the enumerateDevices snippet above;
    // we don't want to flip if we don't have another camera.
    if ( cameraCount < 2 ) { return; }

    const videoTrack = localMediaStream.getVideoTracks()[ 0 ];
    videoTrack._switchCamera();

    isFrontCam = !isFrontCam;
} catch( err ) {
    // Handle Error
}
Once you've got a local and/or remote stream, rendering it is as follows.
Don't forget, the user-facing camera is usually mirrored.
<RTCView
    mirror={true}
    objectFit={'cover'}
    streamURL={localMediaStream.toURL()}
    zOrder={0}
/>
mirror: (boolean: false) Indicates whether the video specified by streamURL should be mirrored.
objectFit: (string: 'contain') Can only be 'contain' or 'cover'.
streamURL: (string: 'streamURL') Required in order to render an actual video stream.
zOrder: (number: 0) Similar to zIndex.