WebCapture SDK - FaceAutocapture and Liveness 

Overview 

WebCapture SDK (FaceAutocapture and Liveness) is intended to be used by service providers to build identity proofing services for their users.

  • Biometric Services exposes a simple REST API to detect and recognize faces from still images.
  • WebCapture SDK (FaceAutocapture and Liveness) brings face and liveness detection from video streams.

WebCapture SDK (FaceAutocapture and Liveness) video adds the ability to detect faces and liveness from video streams, and relies on the Biometric Services core to:

  • Acquire a best-image from the video
  • Create a face resource from this best-image and add it to a bio-session

Note: A demo app is available to showcase the integration of the IDEMIA WebCapture SDK for the IDEMIA Identity offer.

Github repository: https://github.com/idemia/WebCaptureSDK
Section: Face autocapture with liveness detection

Requirements 

Minimum upload/download connectivity: 400 Kbps (i.e., Wi-Fi, 4G, or regular 3G)

Maximum connectivity latency: 500 ms

Minimum supported resolution: HD video (720 × 1280 pixels)
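As an illustration, the connectivity and resolution requirements above can be checked against measured values before starting a capture. This is a minimal sketch: the function name and its inputs are hypothetical, and in practice the bioserver-network-check.js library performs the network measurements.

```javascript
// Hypothetical helper: gate the capture on the documented minimums.
// Inputs are assumed to come from a prior network/camera probe.
function meetsCaptureRequirements({ bandwidthKbps, latencyMs, videoWidth, videoHeight }) {
  const MIN_BANDWIDTH_KBPS = 400;   // minimum upload/download connectivity
  const MAX_LATENCY_MS = 500;       // maximum connectivity latency
  const MIN_PIXELS = 720 * 1280;    // minimum supported HD resolution
  return bandwidthKbps >= MIN_BANDWIDTH_KBPS
    && latencyMs <= MAX_LATENCY_MS
    && videoWidth * videoHeight >= MIN_PIXELS;
}

// Example: a 4G connection with an HD camera passes the check.
console.log(meetsCaptureRequirements({
  bandwidthKbps: 5000, latencyMs: 80, videoWidth: 720, videoHeight: 1280
})); // true
```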

Supported browsers:

  • ANDROID: Chrome 57+; Firefox 52+; Opera 55+; Samsung Internet 9+

  • iOS: Safari 11+

WebCam:

Webcams are supported. However, because average webcam quality is below smartphone camera quality, the following limitations apply:

  • Security: the fraud detection rate is similar to that of a smartphone camera; this choice is driven by security
  • Degraded pass rate: there are about twice as many rejects as with a smartphone camera, depending on webcam quality

Services 

Biometric WebCapture SDK is a JavaScript SDK that permits the autocapture of high-quality selfie images and performs liveness verification through a web browser. No browser extension is required.

The computation is done within the back end. Only minimal resources from the user's smartphone are required.

Autocaptured images can then be matched using Biometric Services that are part of IDEMIA's overall solutions.

Biometric WebCapture SDK allows the following:

  • Provides dynamic guidance to the user in order to ensure a good quality image

  • Detects whether the web browser is compatible

  • Monitors the connectivity during the transaction

Liveness Possibilities 

Passive Liveness 

Passive liveness verifies the user's liveness without requiring the user to move their head or face, providing a frictionless user experience.

This process is compatible with high-end mobile phones, average mobile phones, and some older model or more basic mobile phones.


PAD evaluation is performed by an independent lab according to ISO/IEC 30107-3.

Active Liveness 

Active liveness verifies the user's liveness while the user moves their head. The user is asked to complete a challenge by following, with their head, a series of dots displayed one after another on the screen.

This process is compatible with high-end mobile phones, average mobile phones, and some older model or more basic mobile phones.

PAD evaluation is performed by an independent lab according to ISO/IEC 30107-3.


Getting Started 

Biometric WebCapture SDK is intended to be used by service providers to build identity proofing services for their users. It is a JavaScript SDK hosted within a back end server. This SDK allows face and liveness detection from video streams.

The main services are:

  • Acquiring a best-image from a video stream
  • Performing a liveness check to verify that the acquired FACE is genuine and not a photocopy, video, or mask

JavaScript Files SDK 

This SDK is not a set of tools to download, but rather JavaScript files that are to be integrated into a client web application.

To include the JavaScript files in the main HTML page of the client application:

  • Use a script tag in the HTML header for each JavaScript file

  • Set the src attribute to the .js file location

  • Environment Detection

HTML
<script src="$URL-WBS/video-server/bioserver-environment-api.js"></script>

This detects if the current environment (OS/browser) is supported. If the environment is not supported, the response contains a list of supported browsers according to the current OS (parameter supportedBrowser).

For more details, please refer to: EnvironmentDetection
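For illustration, a handler for the detection response might look like the sketch below. Only the supportedBrowser parameter comes from this documentation; the supported flag and overall response shape are assumptions, so refer to EnvironmentDetection for the real contract.

```javascript
// Hypothetical response handler: `supportedBrowser` is the documented
// parameter; the `supported` flag is an assumed field name.
function describeEnvironment(response) {
  if (response.supported) {
    return 'Environment supported, capture can start';
  }
  // List the supported browsers for the current OS, as returned by the API.
  return 'Unsupported browser. Please use: ' + response.supportedBrowser.join(', ');
}

console.log(describeEnvironment({ supported: false, supportedBrowser: ['Chrome 57+', 'Firefox 52+'] }));
```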

  • Network Check
HTML
<script src="$URL-WBS/video-server/bioserver-network-check.js"></script>

This JavaScript library checks that the user's connectivity meets the requirements for video capture by measuring latency and upload speed.

For more details, please refer to: NetworkCheck

  • UI Extension
HTML
<script src="$URL-WBS/video-server/bioserver-video-ui.js"></script>

This is the user-interface management JavaScript library, which allows you to customize the HTML elements associated with the capture and challenge instructions.

For more details, please refer to: UIExtensions

  • Face Capture
HTML
<script src="$URL-WBS/video-server/bioserver-video-api.js"></script>

This is the JavaScript library that retrieves the user's camera stream from the browser and performs real-time communication using WebRTC and WebSockets.

For more details, please refer to: FaceCapture

Liveness modes 

  • Liveness Passive

The liveness mode is LIVENESS_PASSIVE. It means a liveness check on a single best image without a challenge. Only biometric passive liveness and spoof detection are done.

  • Liveness Passive Video

The liveness mode is LIVENESS_PASSIVE_VIDEO. It means a liveness check on the whole video without a challenge. Only biometric passive liveness and spoof detection are done.

  • Liveness Medium (Deprecated)

Deprecated

  • Liveness High

The liveness mode is LIVENESS_HIGH. Biometric active liveness and spoof detection are performed. The user must complete the "join the dots" challenge, interacting with the Biometrics Web Server by following the challenge instructions on the screen.


Integrate Sample App 

As an integrator, you can follow the three steps below. It takes approximately 15 minutes to set up and test the Biometric WebCapture SDK through our sample client application.

1. Requirements:

Required Systems
  • Linux or Windows OS

  • Memory: At least 8GB of RAM

  • CPU: 2.5 GHz

Install Node.js

To facilitate integration with the Biometric Services SDK, we provide a web application in source code as an integration good practice example.

This sample application is developed in Node.js. To use it, install Node.js.

Integration Environment

In order to start the integration, you need an API key and a sandbox environment. You can obtain these by registering at https://experience.idemia.com/auth/signup/.

Within the dashboard, under API_KEY, the required values are:

  • Address: the backend URL
  • WEBBIO-VIDEO API Key: the APIKEY value

2. Deploy Sample App

  1. Download the latest sample web application from github repository.

Github repository: https://github.com/idemia/WebCaptureSDK
Section: Face autocapture with liveness detection

  2. Unzip the archive and go to the root folder.

  3. Edit the file '/server/config/default.js' and update the configuration variables to set your environment (credentials and Biometric Services URL).

  4. Add your API key by filling in the WEB_SDK_LIVENESS_ID_DOC value.

  5. Modify Biometric Services with your URL (see the Environment value in https://experience.idemia.com/dashboard/my-identity-proofing/access/environments/): BIOSERVER_CORE_URL for the Biometric API and BIOSERVER_VIDEO_URL for the Biometric SDK.

Shell
BIOSERVER_CORE_URL: 'https://XXXXXXXXXX/bioserver-app/v2',
BIOSERVER_VIDEO_URL: 'https://XXXXXXXXXX',
  6. Create a TLS keypair and certificate: you can also convert an existing key/certificate in PEM format into PKCS#12 format, or use an existing PKCS#12 file. Then fill in the values in 'server/config/defaults.js' with the corresponding location and password. See the section How to generate a self-signed certificate for more help.

    Example:

Shell
TLS_KEYSTORE_PATH: path.join(__dirname, 'certs/demo-server.p12'),
TLS_KEYSTORE_PASSWORD: '12345678',
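As a sketch of the self-signed certificate step, a key/certificate pair can be generated with OpenSSL and bundled into PKCS#12. The file names and password mirror the example above; the subject (CN) is an assumption and should match your host.

```shell
# Generate a self-signed RSA key and certificate valid for one year (no passphrase on the key)
openssl req -x509 -newkey rsa:2048 -keyout demo-server.key -out demo-server.crt \
  -days 365 -nodes -subj "/CN=localhost"

# Bundle key + certificate into the PKCS#12 keystore referenced by TLS_KEYSTORE_PATH
openssl pkcs12 -export -inkey demo-server.key -in demo-server.crt \
  -out demo-server.p12 -passout pass:12345678
```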

3. Run and Test Sample App

  1. Open a terminal in the root folder.

  2. Run the following command to load the dependencies:

Shell
npm install --verbose

  3. Run the following command to start the sample application:

Shell
npm run start

Now you can open a browser and run:

https://localhost:9943/demo-server/

For the best quality, use a smartphone connected to the same network with no firewall in between: https://IP_ADDRESS:9943/demo-server/.

To test the sample source code from GitHub with an Android phone, please consult the FAQ section.

Use Case 1: Only Biometrics Required

The provided sample is ready to be used. No further modifications are required.

Use Case 2: Integration with ID&V Global
  1. To link Biometric Services with ID&V/GIPS, edit the file /server/config/default.js and also update the variables as follows:
  • set IDPROOFING to true

  • set GIPS_URL to the URL you received

  • set GIPS_RS_API_Key with the API key header to use

  2. Open a terminal in the root folder.

  3. Run the following command to load the dependencies:

Shell
npm install --verbose

  4. Run the following command to start the sample application:

Shell
npm run start

  5. Now you can open a browser and run:

https://localhost:9943/demo-server/

To test the sample source code from GitHub with an Android phone, please consult the FAQ section.

Configuration Variables 

Parameters for Changing Liveness Mode

Variable | Description | Value
LIVENESS_MODE | The liveness capture mode. Determines the type of capture and liveness control performed on the video stream. | Allowed values: NO_LIVENESS, LIVENESS_PASSIVE, LIVENESS_PASSIVE_VIDEO, LIVENESS_HIGH
LIVENESS_HIGH_NUMBER_OF_CHALLENGE | Number of dots generated for the "join the dots" challenge. Only applies when LIVENESS_MODE is set to LIVENESS_HIGH. | 2

Configuration Variables for Changing Security/Usability Compromise

Variable | Description | Value
LIVENESS_SECURITY_LEVEL | Specifies the security level. A higher level makes a smaller target. | Allowed values: LOW, MEDIUM, HIGH. Other values are deprecated.

Other Configuration Variables

The table shows other configuration variables used for the autocapture.

Variable | Description | Value
DISABLE_CALLBACK | Disables the callback functionality from WebBioServer | true
SERVER_PUBLIC_ADDRESS | Sample page public address. Used to call back the sample page when the liveness capture is finished. | https://[ip_or_servername]:[port] (e.g., https://localhost:9943)
LIVENESS_RESULT_CALLBACK_PATH | Path used in the callback URL to receive the liveness result from the WebBioServer | /liveness-result-callback
BIOSERVER_CORE_URL | WBS core URL for image coding and matching. WBS exposes a simple REST API to detect and recognize faces from still images, and to save and retrieve the liveness capture result in a session. The WebCapture SDK uses this server to code the captured best image and to save and retrieve the liveness capture result in a session. | https://[ip_or_servername]:[port]/bioserver-app/ (e.g., https://localhost/bioserver-app/)
BIOSERVER_VIDEO_URL | WebCapture SDK server URL | https://[ip_or_servername]:[port]/ (e.g., https://localhost:9443)
WEB_SDK_LIVENESS_ID_DOC | API key value sent via API_KEY_HEADER | ********************
IDPROOFING | Links the sample application server with GIPS | false
GIPS_URL | ID&V GIPS API URL | https://[ip_or_servername]:[port]/gips/rest
GIPS_RS_API_Key | API key value sent to ID&V | ********************

Description of the files from source code:

Filename | Description
./index.js | Node.js index file that initializes the front-end endpoints and calls ./server/httpEndpoints.js for the back-end endpoints
./package.json | Node.js dependencies
./GettingStarted.md | Readme markdown file
./assets/* | Contains a video tutorial for high liveness
./licenses | Licenses from the demonstration project
./server | Back-end side package
./server/wbs-api.js | Allows communication with the WebBioserver API
./server/packer.js | Prepares the front-end source to be exposed
./server/httpEndpoints.js | Back-end endpoints (used by the front end to reach GIPS and WebBioserver)
./server/gips-api.js | Allows communication with the GIPS API
./server/config/index.js | Reads the server configuration file and sets default keys
./server/config/defaults.js | Server configuration file
./server/config/certs/* | Procedure for TLS certificate generation
./server/config/i18n/* | Translation files (Spanish / French / Japanese)
./front | Front-end side package
./front/utils/* | Common resources called by front-end JS
./templates | Front-end sources divided by supported liveness mode
./templates/high-liveness/index.js | High-liveness JavaScript. All the JS source code to integrate high liveness is here.
./templates/high-liveness/index.html | High-liveness HTML. All the HTML source code to integrate high liveness is here.
./templates/high-liveness/home.html | Home page for high liveness that exposes only links to the main high-liveness index.html page
./templates/high-liveness/statics | Assets: images, logo, fonts, CSS for high liveness
./templates/high-liveness/animations | JSON animation files (alternative to .gif) for high liveness
./templates/passive-liveness/index.js | Passive-liveness JavaScript. All the JS source code to integrate passive liveness is here.
./templates/passive-liveness/index.html | Passive-liveness HTML. All the HTML source code to integrate passive liveness is here.
./templates/passive-liveness/home.html | Home page for passive liveness that exposes only links to the main passive-liveness index.html page
./templates/passive-liveness/statics | Assets: images, logo, fonts, CSS for passive liveness
./templates/passive-liveness/animations | JSON animation files (alternative to .gif) for passive liveness
./templates/passive-video-liveness/index.js | Passive-video-liveness JavaScript. All the JS source code to integrate passive video liveness is here.
./templates/passive-video-liveness/index.html | Passive-video-liveness HTML. All the HTML source code to integrate passive video liveness is here.
./templates/passive-video-liveness/home.html | Home page for passive video liveness that exposes only links to the main passive-video-liveness index.html page
./templates/passive-video-liveness/statics | Assets: images, logo, fonts, CSS for passive video liveness
./templates/passive-video-liveness/animations | JSON animation files (alternative to .gif) for passive video liveness

Use Cases 

The two use cases for liveness detection and their corresponding UML diagrams follow.

Note: These use cases refer to comparisons with a reference image. The reference face image is any previously acquired face image, which can be:

  • A face image extracted from the identity document, either from the scan of the identity document or from the NFC chip of a passport
  • A face stored in a system of record (SOR), such as a driver's license

Use Case 1: Liveness Detection and Matching Use Case 

API UML Diagram

The API UML diagram for the liveness detection and matching use case is shown.

Use Case Overview

This use case consists of determining that the user interacting with the application is a physically present human being and not an animated artifact:

  • If the liveness check is successful, the extracted portrait can be compared to a reference image.

  • A Service Provider (SP) is an entity developing applications and use cases on top of the Biometric WebCapture Server.

  • The WebCapture Server doesn't know the users and doesn't keep any user's data. Users are managed by the SP.

API Process Steps

Step 1: Load web application with WebCapture JavaScript SDK

This step is described on the API UML Diagram in lines 1 to 4 above:

  • A user is asked for a face biometric authentication via a web application developed by SP.

  • The user launches the web application with a compatible browser.

By this action, all the JavaScript libraries required to interact with the web capture server are loaded in the browser and become ready to use as described in the section below:

HTML
<script src="$URL-WBS/video-server/bioserver-video-api.js"></script>
<script src="$URL-WBS/video-server/bioserver-environment-api.js"></script>
<script src="$URL-WBS/video-server/bioserver-network-check.js"></script>
<script src="$URL-WBS/video-server/bioserver-video-ui.js"></script>
Step 2: Initialize a liveness session

This step is described on the API UML Diagram in lines 5 to 11 above:

  • The user asks for a face liveness capture session.

  • The web application of SP handles the request and uses Rest API initLivenessSession of the Biometric WebCapture Server.

This request creates a new session with the liveness verification settings.

Step 3: Retrieve a video stream

This step is described on the API UML Diagram in line 13 above:

  • The user uses the SDK JavaScript function to retrieve a video stream of the selected device.

  • getMediaStream is a JavaScript function executed in the browser that requests access to the given audio-input and camera devices and returns the associated media stream.

  • When opening a media stream a specific configuration can be applied to define capture conditions such as camera resolution and frame rate.

Step 4: Initialize a face capture

This step is described on the API UML Diagram in line 14 above:

  • The user uses the SDK JavaScript function to initialize a face capture client.

  • initFaceCaptureClient is a JavaScript function executed in the browser that creates a capture client with a specific configuration that determines the behavior of the client when certain events occur during the capture.

    These events can be:

    • Tracking events that trace the position of the end user's face
    • Instructions for completing a challenge
    • End of capture event
    • Error events
  • The face capture client is composed of a websocket client and a webrtc client used for real time communication.

Step 5: Start the face capture

This step is described on the API UML Diagram after the note Send video stream:

  • The returned face capture client lets you start and stop the face capture on a given video stream, catch face tracking info, manage challenges, and handle errors.

  • The start JavaScript function is used to start the capture by establishing a peer-to-peer communication between the client (browser) and the server located in the Capture server.

Step 6: Complete the challenge by following the server instructions

This step is described on the API UML Diagram after the note Send video stream. Depending on the verification level configured, instructions are sent back to the user to perform challenges.

Step 7: End the capture process

This step is described on the UML API Diagram in lines 18 to 22 above. The capture can end in several ways:

  • The liveness verification is completed (success or failure) on the server side. The server stops the process and sends a 'stop video capture' message to the client.

  • The capture timeout is reached and then the server stops the process and sends a stop video capture message to the client.

  • The client can then use the stop JavaScript function to stop the communication and close the camera.

Step 8: Ask for a liveness detection result

This step is described on the UML API Diagram in lines 23 to 30 above. To retrieve the result of the capture and liveness check, two modes are available:

  • Polling on Biometric Services Rest API: getLivenessChallengeResult URL.

  • Using Biometric Services WebHook: After the capture is done, the SP's server will receive a notification indicating the result is available.
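The polling mode can be sketched as a generic retry loop. In the sketch below, pollFn stands in for an HTTP call to the getLivenessChallengeResult endpoint; the helper name, interval, and timeout values are illustrative.

```javascript
// Generic polling helper: calls pollFn until it returns a non-null result
// or the timeout elapses. pollFn is a stand-in for an HTTP call to the
// getLivenessChallengeResult endpoint.
async function pollForResult(pollFn, { intervalMs = 1000, timeoutMs = 30000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const result = await pollFn();
    if (result !== null) return result;                   // result available
    await new Promise((r) => setTimeout(r, intervalMs));  // wait before retrying
  }
  throw new Error('Timed out waiting for liveness result');
}

// Example with a mock that becomes ready on the third call:
let calls = 0;
pollForResult(async () => (++calls < 3 ? null : { livenessStatus: 'SUCCESS' }), { intervalMs: 10 })
  .then((r) => console.log(r.livenessStatus)); // prints "SUCCESS"
```

The webhook mode avoids this loop entirely: the SP's server simply exposes an endpoint and acts when the notification arrives.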

Retrieving the Capture

The SP's server uses the Biometric Services Rest API getLivenessChallengeResult URL to retrieve the capture and then presents it to the user.

Returning the Results

At the end of the capture, if the verification was successful, the server returns the following to the SP:

  • The result of the biometric liveness verification (SUCCESS, FAILED, SPOOF, ERROR, TIMEOUT)

    • SUCCESS: the liveness test completed successfully.
    • FAILED: the liveness test did not complete; a technical error occurred.
    • ERROR: the liveness test did not complete; a technical error occurred.
    • SPOOF: the liveness test was not a success; a deception (spoof) was suspected.
    • TIMEOUT: the liveness test was not completed within the time permitted.
  • The identifier of the best captured image and whether the verification was successful
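As an illustration, a client can dispatch on these result values as shown below. The status values are those listed above; the mapping to user-facing actions and messages is ours, not part of the SDK.

```javascript
// Dispatch on the documented liveness verification results.
// The returned messages are illustrative, not part of the SDK.
function handleLivenessResult(status) {
  switch (status) {
    case 'SUCCESS': return 'liveness confirmed, proceed to matching';
    case 'SPOOF':   return 'presentation attack suspected, reject';
    case 'TIMEOUT': return 'capture not completed in time, retry';
    case 'FAILED':
    case 'ERROR':   return 'technical error, retry';
    default:        throw new Error('Unknown liveness status: ' + status);
  }
}

console.log(handleLivenessResult('SUCCESS')); // "liveness confirmed, proceed to matching"
```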

Step 9: Ask for the best face image captured

The Service Provider's server can use the Biometric Services Rest API getFaceImage

  • getFaceImage: retrieves the best image captured and stored in the Biometric Services session as the face resource. This step is described on the sequence diagram in lines 31 to 33.
Step 10: Match the best image against the reference image

This step is described on the API UML Diagram in lines 34 to 36 above:

  • In addition to face detection, there is the possibility to verify an identity by using biometric matching between the captured face and the reference portrait.

  • The SP can authenticate a captured image by matching it against a reference image from a database or a selfie captured online.

    This uses the Biometric Services Rest API below:

    • getMatches: the reference face is compared to the captured image created in the Biometric service session. The result of the comparison is called a “match”.
    • The match is composed of the reference face, a candidate face, a matching score, and a false acceptance rate.
    • The check is successful if the matching score is above a threshold defined by configuration.
FAR Threshold
  • The recommended threshold for the selfie/selfie matching is: 3000, 3500, or higher depending on the use case.
  • The threshold that you want to use is driven by the expected FAR (False Acceptance Rate) as shown in the table below.
FAR | Matching threshold
0.0001% | 4500
0.001% | 4000
0.01% | 3500
0.1% | 3000
1% | 2500
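The table above can be encoded directly: given a target FAR, look up the threshold and compare it with the matching score returned by getMatches. This is a minimal sketch; the lookup structure and function name are ours, while the threshold values come from the table.

```javascript
// Thresholds from the FAR table above (target FAR -> matching threshold).
const FAR_THRESHOLDS = {
  '0.0001%': 4500,
  '0.001%': 4000,
  '0.01%': 3500,
  '0.1%': 3000,
  '1%': 2500,
};

// A match succeeds when the score returned by getMatches reaches the
// threshold chosen for the expected false acceptance rate.
function isMatch(score, targetFar) {
  const threshold = FAR_THRESHOLDS[targetFar];
  if (threshold === undefined) throw new Error('Unknown FAR: ' + targetFar);
  return score >= threshold;
}

console.log(isMatch(3600, '0.01%'));  // true: 3600 >= 3500
console.log(isMatch(3600, '0.001%')); // false: 3600 < 4000
```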

For more information regarding False Acceptance Rate and False Rejection Rate, see Face Matching Configuration.

Web Service Calls

This section of the document is a short description of the web services called in the current use case. There are several ways to make the appropriate web service calls.

These samples focus on the use of cURL requests:

Init Liveness Session

initLivenessSession

Get Liveness Challenge Result

getLivenessChallengeResult

Get Face Image

getFaceImage

Get Matches

getMatches

JavaScript Function Calls for Use Case

This section of the document is a short description of the JavaScript function called in the current use case. Details about all the JavaScript function calls are available in the JavaScript API documentation section.

initMediaDevices (Deprecated)

Deprecated

getDeviceStream (Deprecated)

Deprecated

initFaceCaptureClient
JavaScript
BioserverVideo.initFaceCaptureClient

This function initializes a face capture client with the given configuration. The returned client will let you start and stop the face capture on a given video stream, catch face tracking info, manage challenges, and handle errors.

Example Request

The request to capture the HD video stream (without audio) for the default camera device is shown in the snippet:

JavaScript
const faceCaptureOptions = {
  wspath: 'video-server/engine.io',
  bioserverVideoUrl: '$URL-WBS',
  rtcConfigurationPath: '$URL-WBS/video-server/coturnService?bioSessionId=' + encodeURIComponent(sessionId),
  bioSessionId: sessionId,
  onClientInitEnd: () => { console.log('Init ended. Remove loading for video'); },
  trackingFn: (trackingInfo) => { console.log('onTracking', trackingInfo); },
  errorFn: (error) => { console.log('face capture error', error); },
  showChallengeInstruction: (challengeInstruction) => { console.log('challenge instructions', challengeInstruction); },
  showChallengeResult: () => { console.log('call back the backend to retrieve liveness result'); }
};

const faceCaptureClient = await BioserverVideo.initFaceCaptureClient(faceCaptureOptions);
Parameters

The parameters used are described below. Details about the parameters are available in the JavaScript API section.

Field | Type | Description
rtcConfigurationPath | String | Endpoint from the bioserver (not API-key protected) used to retrieve TURN server credentials for WebRTC functionality.
bioserverVideoUrl | String | The server URL used to communicate with the server via WebSockets.
wspath | String | The WebSocket path used to communicate with the server. Example: "/myserver/wsocket". Allowed value: "/video-server/engine.io".
bioSessionId | String | The bio-session id in which the user images will be temporarily stored during the capture process.
onClientInitEnd | Function | The callback that notifies the end of the initialization.
trackingFn | Function | The callback that handles the face tracking information. It is fired on each video frame with face tracking information.
showChallengeInstruction | Function | The callback that handles challenge instructions. It is fired only if a liveness check (LIVENESS_HIGH set by the integrator) is requested and only when the user's face is detected.
showChallengeResult | Function | The callback fired once the challenge is done. The results have to be requested by the service provider (SP).
errorFn | Function | The callback that handles video capture errors. It is fired when an error happens during the capture process.
Tracking Info

Field | Type | Description
phoneNotVertical | Boolean | The phone position is not correct.
tooClose | Boolean | The phone is too close.
tooFar | Boolean | The phone is too far.
faceh | Integer | If faceh === 0, the user is not moving their head or is moving their phone.
facew | Integer | If facew === 0, the user is not moving their head or is moving their phone.
livenessHigh.stillFace | Boolean | The user is not moving their head.
livenessHigh.movingPhone | Boolean | The user is moving their phone.
livenessHigh.positionInfo | String | Instructions for the user.

Example instructions from livenessHigh.positionInfo:

Enumeration | Description
TRACKER_POSITION_INFO_MOVE_BACK_INTO_FRAME | No head detected.
TRACKER_POSITION_INFO_STAND_STILL | Stand still.
TRACKER_POSITION_INFO_CENTER_MOVE_BACKWARDS | Move away from the camera.
TRACKER_POSITION_INFO_CENTER_MOVE_FORWARDS | Move closer to the camera.
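A trackingFn callback might translate this tracking information into on-screen hints as sketched below. The field names come from the tables above; the message strings, the ordering of checks, and the treatment of faceh/facew === 0 as "face not tracked" are our assumptions.

```javascript
// Translate per-frame tracking info (fields documented above) into a UI hint.
// Checks are ordered from device placement to face position (our choice).
function trackingHint(info) {
  if (info.phoneNotVertical) return 'Hold your phone vertically';
  if (info.tooClose) return 'Move the phone farther away';
  if (info.tooFar) return 'Move the phone closer';
  // faceh/facew are the face box dimensions; 0 is treated here as
  // "face not tracked" (assumption).
  if (info.faceh === 0 || info.facew === 0) return 'Position your face in the frame';
  if (info.livenessHigh && info.livenessHigh.positionInfo === 'TRACKER_POSITION_INFO_STAND_STILL') {
    return 'Stand still';
  }
  return null; // no correction needed for this frame
}

console.log(trackingHint({ tooFar: true })); // "Move the phone closer"
```

Such a function would be passed as the trackingFn option of initFaceCaptureClient, with its return value rendered in the capture overlay.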

For more information, please consult the demo on GitHub: https://github.com/idemia/WebCaptureSDK

Example Response

The returned face capture client will let you start and stop the face capture on a given video stream, catch face tracking info, manage challenges, and handle errors.

JavaScript
const faceCaptureClient = await BioserverVideo.initFaceCaptureClient(faceCaptureOptions);
// faceCaptureClient = {
//   start : function(stream: MediaStream),
//   cancel : function()
// }
start

The start JavaScript function of the capture client lets you start the capture by establishing a peer-to-peer communication between the client (browser) and the WebCapture server.

JavaScript
BioserverVideo.start
Example Request

The start request for the face capture is shown in the snippet:

JavaScript
// start face capture (e.g., when the user clicks the capture button)
faceCaptureClient.start(videoStream);
Example Response

This function has no return type.

Calling this function starts the webrtc and web socket communication. In case of an error, the event is captured and processed, an appropriate message is displayed, and the capture is stopped.

Use Case 2: Liveness Detection with ID&V GIPS (Identity Document Capture and Verification) Use Case 

ID&V offers a global identity service for capturing and validating a user's portrait. This service:

  1. Captures the user's portrait during a video stream
  2. Verifies that the user is a live person
  3. Verifies that the face corresponds to the face that is displayed on a reference identity document (evidence). That reference identity document will have been previously verified by the service.

The liveness portrait video capture uses the WebCapture SDK for face and liveness detection:

  • The liveness portrait video is acquired from the browser

  • The liveness capture with Challenge/Response is performed (user has to move their head with movement determined by the service provider)

  • The best portrait image is extracted

This best image is used internally by ID&V during biometric matching, in the same way as a selfie image captured for biometric user verification.

Requirements

To execute the scenarios, the client application needs API Keys and URLs to access the ID proofing service and the Biometric WebCapture Server:

  • GIPS-RS key for back-end–to–back-end communication
  • GIPS-UA key for the user-facing application to ID Proofing back-end communication
  • An API key and a URL to access the WebCapture Server
  • An API key and a URL to access the Biometric Services REST API.

See the provided sample web application in Getting Started for more details.

Details about the Identity Verification with the ID&V service are available in the Identity Document Capture and Verification (ID&V) Guide.

API UML Diagram

The API UML diagram below details how a client application can verify an identity document and a user's portrait using the Biometric WebCapture Server to verify the liveness of the user's portrait.

There are two ways of capturing a self-portrait image for an individual:

  • Selfie capture
  • Liveness video capture

API Process Steps

Step 1: Load the client application with the WebCapture JavaScript SDK and ID&V REST service client

This step is described on the sequence diagram above by lines 1 to 4:

  • A user is asked for a face biometric authentication via a web application developed by the Service Provider (SP).

  • The user launches the web application with a compatible browser.

By this action, all the JavaScript libraries required to interact with the web capture server are loaded in the browser and become ready to use as described in the section below:

HTML
<script src="$URL-WBS/video-server/bioserver-video-api.js"></script>
<script src="$URL-WBS/video-server/bioserver-environment-api.js"></script>
<script src="$URL-WBS/video-server/bioserver-network-check.js"></script>
<script src="$URL-WBS/video-server/bioserver-video-ui.js"></script>
Step 2: Start the identity proofing on the ID&V server

This step is described on the sequence diagram by lines 5 to 14 as shown in the sections below:

  • Create Identity

    This creates an identity on the ID&V server that will receive all of the data and gather the verification results related to this identity.

  • Submit Consent

    This notifies the ID proofing service of the different verifications the user has consented to. In this case, a biometric verification.

  • Start Liveness Session

    The client application sends a request to ID&V to start a live video capture. ID&V will ask for a session creation on the Biometrics Server via the Rest API. The stage of face detection and liveness verification from video streams can begin.

Step 3: Initialize a liveness session

This step is described on the sequence diagram by lines 15 to 18:

  • The user asks for a face liveness capture session.

  • The web application of the SP handles the request and uses the Rest API initLivenessSession of the Web Capture server.

  • This request creates a new session with the liveness verification settings.

Step 4: Retrieve the video stream

This step is described on the sequence diagram by line 20:

  • The user uses the SDK JavaScript function to retrieve the video stream of the selected device.

  • getDeviceStream is a JavaScript function executed in the browser that requests access to the given audio-input/camera devices and returns the associated media stream.

  • When opening the media stream, a specific configuration can be applied to define capture conditions such as the camera resolution and frame rate.
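
As a sketch of such a configuration, the helper below builds a constraints object for the HD resolution listed in the Requirements. Note that buildVideoConstraints is a hypothetical name (not part of the SDK), and the commented call shows the standard getUserMedia API that underlies device stream requests:

```javascript
// Hypothetical helper: builds a constraints object matching the HD requirement
// (720 x 1280 pixels) from the Requirements section. Names are illustrative only.
function buildVideoConstraints({ width = 1280, height = 720, frameRate = 30 } = {}) {
  return {
    audio: false,
    video: {
      width: { ideal: width },
      height: { ideal: height },
      frameRate: { ideal: frameRate },
      facingMode: 'user' // front camera on mobile devices
    }
  };
}

// In the browser, a stream could then be opened with the standard API, e.g.:
// const videoStream = await navigator.mediaDevices.getUserMedia(buildVideoConstraints());
```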

Step 5: Initialize a face capture

This step is described on the sequence diagram by line 21:

  • The user uses the SDK JavaScript function to initialize a face capture client.

  • initFaceCaptureClient is a JavaScript function executed in the browser that creates a capture client with a specific configuration that determines the behavior of the client when certain events occur during a capture.

These events can be:

  • Tracking events that trace the position of the end user's face
  • Instructions for completing a challenge
  • End of capture event
  • Error events

The face capture client is composed of a WebSocket client and a WebRTC client used for real-time communication.

Step 6: Start a face capture

This step is described on the sequence diagram by the note 'send video stream':

  • The returned face capture client lets you start and stop the face capture on a given video stream, catch face tracking info, manage challenges, and handle errors.

  • The start JavaScript function is used to start the capture by establishing a peer-to-peer communication between the client (browser) and the Web Capture server.

Step 7: Complete the challenge by following the server instructions

This step is described on the sequence diagram after the note 'send video stream'.

Depending on the verification level configured, instructions are sent back to the user to perform challenges.

Step 8: Ask for the face and liveness detection result

To retrieve the result of the capture and liveness check, two modes are proposed:

  • Polling on the ID&V Rest API Get portrait status URL.

  • Using the ID&V WebHook feature: after the capture is done, the SP server receives a notification indicating that the result is available.

The client application uses the ID&V Rest API Get portrait status URL to retrieve the capture results and presents them to the user.
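
As a sketch of the polling mode, the loop below repeatedly invokes a status-fetching function until the portrait leaves its in-progress states. Here pollPortraitStatus and its getStatus parameter are hypothetical names; the actual status call is the ID&V Get portrait status request:

```javascript
// Hypothetical polling sketch: getStatus is any async function that performs the
// ID&V "Get portrait status" request and resolves to its JSON body.
async function pollPortraitStatus(getStatus, { maxAttempts = 10, intervalMs = 2000 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await getStatus();
    // PROCESSING and ADJUDICATION are still in progress; anything else is final.
    if (result.status !== 'PROCESSING' && result.status !== 'ADJUDICATION') {
      return result; // e.g. VERIFIED, INVALID or NOT_VERIFIED
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Portrait status polling timed out');
}
```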

At the end of the capture, if the verification was successful, the server returns to the client application:

  • The result of the biometric liveness verification
  • The identifier of the captured portrait

Step 9: Ask for the best portrait captured

The client application uses the ID&V Rest API Get Portrait capture to retrieve the best image captured and stored in the ID&V identity related to the user.

Use Case Web Service Calls

This section is a short description of the web services called in the current use case.

There are several ways to make the appropriate web service calls. These samples focus on the use of cURL requests.

Init Liveness session

initLivenessSession

Get Liveness Challenge Result

getLivenessChallengeResult

Get Face Image

getFaceImage

Get Matches

getMatches

JavaScript Function Calls

This section of the document is a short description of the JavaScript functions called in the current use case. Details about all the JavaScript function calls are available in the JavaScript API documentation section.

BioserverVideo.initMediaDevices (Deprecated)

Deprecated

BioserverVideo.getDeviceStream (Deprecated)

Deprecated

BioserverVideo.initFaceCaptureClient

This function initializes a face capture client with the given configuration. The returned client will let you start and stop the face capture on a given video stream, catch face tracking info, manage challenges, and handle errors.

JavaScript
BioserverVideo.initFaceCaptureClient
Example Usage

This snippet for BioserverVideo.initFaceCaptureClient initializes the face capture client:

JavaScript
const faceCaptureOptions = {
  wspath: 'video-server/engine.io',
  bioserverVideoUrl: '$URL-WBS',
  rtcConfigurationPath: '$URL-WBS/video-server/coturnService?bioSessionId=' + encodeURIComponent(sessionId),
  bioSessionId: sessionId,
  trackingFn: (trackingInfo) => { console.log('onTracking', trackingInfo); },
  errorFn: (error) => { console.log('face capture error', error); },
  showChallengeInstruction: (challengeInstruction) => { console.log('challenge instructions', challengeInstruction); },
  showChallengeResult: () => { console.log('call back the backend to retrieve liveness result'); }
};

const faceCaptureClient = await BioserverVideo.initFaceCaptureClient(faceCaptureOptions);
Parameters

The parameters used are described in the table. Details about the parameters are available in the JavaScript API section.

Field | Type | Description
rtcConfigurationPath | String | Endpoint from BioServer (not API key protected) to retrieve credentials of the TURN server in order to use the WebRTC functionality.
bioserverVideoUrl | String | The URL of the video server, used for WebSocket communication.
wspath | String | The WebSocket path used to communicate with the server. Example: "/myserver/wsocket". Allowed value: "/video-server/engine.io".
bioSessionId | String | The bio-session ID in which the user images will be temporarily stored during the capture process.
trackingFn | Function | The callback that handles the face tracking information. Fired on each video frame with face tracking information.
showChallengeInstruction | Function | The callback to handle challenge instructions. Fired only if the liveness check (LIVENESS_HIGH set by the integrator) is requested and only when the user's face is detected.
showChallengeResult | Function | Fired once the challenge is done. The results have to be requested by the service provider (SP).
errorFn | Function | The callback to handle video capture errors. Fired when an error happens during the capture process.
Tracking Info

Field | Type | Description
phoneNotVertical | Boolean | Phone position is not correct.
tooClose | Boolean | Phone is too close.
tooFar | Boolean | Phone is too far.
faceh | Integer | If faceh === 0, the user is not moving their head or their phone.
facew | Integer | If facew === 0, the user is not moving their head or their phone.
livenessHigh.stillFace | Boolean | User is not moving their head.
livenessHigh.movingPhone | Boolean | User is not moving their phone.
livenessHigh.positionInfo | String | Instructions to the user.

Instructions from livenessHigh.positionInfo are, for example:

Enumeration | Description
TRACKER_POSITION_INFO_MOVE_BACK_INTO_FRAME | No head detected.
TRACKER_POSITION_INFO_STAND_STILL | Stand still.
TRACKER_POSITION_INFO_CENTER_MOVE_BACKWARDS | Move away from the camera.
TRACKER_POSITION_INFO_CENTER_MOVE_FORWARDS | Move closer to the camera.
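
A trackingFn callback can translate these codes into user-facing hints. The mapping below is an illustrative sketch: the hint wording comes from the table above, and hintFor is a hypothetical helper, not part of the SDK:

```javascript
// Illustrative mapping from positionInfo codes to user-facing hints.
const POSITION_HINTS = {
  TRACKER_POSITION_INFO_MOVE_BACK_INTO_FRAME: 'No head detected, move back into the frame',
  TRACKER_POSITION_INFO_STAND_STILL: 'Stand still',
  TRACKER_POSITION_INFO_CENTER_MOVE_BACKWARDS: 'Move away from the camera',
  TRACKER_POSITION_INFO_CENTER_MOVE_FORWARDS: 'Move closer to the camera'
};

// Hypothetical helper for use inside trackingFn: returns a hint string or null.
function hintFor(trackingInfo) {
  const code = trackingInfo && trackingInfo.livenessHigh && trackingInfo.livenessHigh.positionInfo;
  return POSITION_HINTS[code] || null;
}
```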

For more information, please consult the demo on GitHub: https://github.com/idemia/WebCaptureSDK

Example Response

This snippet for the returned face capture client lets you start and stop the face capture on a given video stream, catch face tracking info, manage challenges, and handle errors:

JavaScript
const faceCaptureClient = await BioserverVideo.initFaceCaptureClient(faceCaptureOptions);

// faceCaptureClient = {
//   start : function(stream: MediaStream),
//   cancel : function()
// }
Start Face Capture

The start JavaScript function of the capture client starts the capture by establishing a peer-to-peer communication between the client (browser) and the WebCapture server.

Example Usage

The start snippet starts the face capture process with the client and the server:

JavaScript
// start face capture (e.g. when the user clicks the capture button)
faceCaptureClient.start(videoStream);
Return Value

This function has no return value.

Calling this function starts the WebRTC and WebSocket communication. In case of an error, the event is captured and processed, an appropriate message is displayed, and the capture is stopped.

ID&V Web Service Calls

This section is a short description of ID&V web services used in the face and liveness detection.

Details about the ID&V web service calls are available in the Using ID&V for Face Liveness Detection Guide.

The variables used in the request URLs are:

Variable | Meaning
URL_MAIN_PART | The ID&V domain.
APIKEY_VALUE | Client application API key as provided by portal administrator(s).
IDENTITY_ID | The id value from the Create Identity response message.
Create an Identity

This web service call creates an identity ID that will be used to identify the current transaction in other requests.

Sample Request

This request initiates the verification process with ID&V as shown in the snippet:

Shell
curl -X POST https://[URL_MAIN_PART]/gips/v1/identities \
  -H 'Content-Type: application/json' \
  -H 'apikey: [APIKEY_VALUE]'
Sample Response

When the request is sent, the ID&V response contains an id field as shown in the snippet:

Note: The value of that field replaces IDENTITY_ID in subsequent requests.

JSON
{
  "id": "d4eee197-69e9-43a9-be07-16cc600d04e8",
  "status": "EXPECTING_INPUT",
  "levelOfAssurance": "LOA0",
  "creationDateTime": "2018-11-20T13:41:00.869",
  "evaluationDateTime": "2018-11-20T13:41:00.883",
  "upgradePaths": {
    // ...
  }
}
Parameters

The parameters used are described in the table. Details about the parameters are available in the Using ID&V for Face Liveness Detection Guide.

Variable | Description
id | The identity ID that will be used to identify the current transaction in other requests
status | Status of the transaction
levelOfAssurance (LOA) | Level of trust of the current identity
creationDateTime | Identity creation date
evaluationDateTime | Last date on which the identity was evaluated
upgradePaths | List of possible submissions that would increase the LOA
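
The same call can be made from JavaScript. The sketch below mirrors the cURL request above with an injectable fetch implementation; createIdentity is a hypothetical wrapper (not an SDK function), and the URL and API key placeholders keep the meaning from the variables table:

```javascript
// Hypothetical wrapper around the Create Identity request shown above.
// fetchFn is injectable so the function can be exercised without a network.
async function createIdentity(urlMainPart, apiKey, fetchFn = fetch) {
  const response = await fetchFn(`https://${urlMainPart}/gips/v1/identities`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', apikey: apiKey }
  });
  if (!response.ok) {
    throw new Error(`Create Identity failed with HTTP ${response.status}`);
  }
  const identity = await response.json();
  return identity.id; // replaces IDENTITY_ID in subsequent requests
}
```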

Submit Consent

Consent is a notification from the client application to ID&V that the user consents to their personal information (the portrait image and biometrics) being processed by ID&V for a given period.

Example Request

In this request, the client application notifies ID&V that the user has consented to ID&V using biometric matching as shown in the snippet:

Shell
curl -X POST \
  https://[URL_MAIN_PART]/gips/v1/identities/[IDENTITY_ID]/consents \
  -H 'Content-Type: application/json' \
  -H 'apikey: [APIKEY_VALUE]' \
  -d '[{
    "approved": true,
    "type": "PORTRAIT"
  }]'
Example Response

This response sends the consentId and approval as shown in the snippet:

JSON
{
  "consentId": "05248dc7-5687-4a95-a127-514829e9b68c",
  "approved": true,
  "type": "GIV",
  "validityPeriod": {
    "to": "2019-11-13"
  }
}
Parameters

The parameters used are described in the table. Details about the parameters are available in the Using ID&V for Face Liveness Detection Guide.

Variable | Description
consentId | The consent ID that might be used to identify the submitted consent.
approved | Boolean indicating the status of the consent (true/false).
type | Type of consent submitted (possible values may be: PORTRAIT, GIV). The enumerated values can be found under the API Docs section in the Portal.
validityPeriod | The period for which the consent is considered valid.
validityPeriod.to | The date at which the consent expires and is no longer considered valid.
Start a Live Capture Session

With the live-capture-video-session request, the client application starts a live capture video session of the person in order to capture the best quality image that will be compared with a portrait extracted from an evidence reference (a VERIFIED identity document).

This web service call is done in synchronous mode. Upon receipt of this request, ID&V creates a Biometric service session. ID&V provides, in the response, a Biometric service session identifier that the service provider uses to initialize the video stream between the browser and the Biometric service.

Example Request

The live-capture-video-session request to start a live capture video session is shown in the snippet:

Shell
curl -X POST \
  https://[URL_MAIN_PART]/gips/v1/identities/[IDENTITY_ID]/attributes/portrait/live-capture-video-session \
  -H 'Content-Type: multipart/form-data' \
  -H 'apikey: [APIKEY_VALUE]'
Example Response

The response from the live-capture-video-session request is shown in the snippet:

JSON
{
  "status": "PROCESSING",
  "type": "PORTRAIT",
  "id": "2d5e81c6-a600-47ed-aa22-2101b940fed6",
  "sessionId": "891a6728-1ac4-11e7-93ae-92361f002671"
}
Parameters

The parameters used are described in the table. Details about the parameters are available in the Using ID&V for Face Liveness Detection Guide.

Variable | Description
id | The user portrait identifier that will be used in future requests.
status | Status of the portrait.
sessionId | The Biometric Service session identifier related to the same ID&V identity.
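
The sessionId returned here is the value expected as bioSessionId by BioserverVideo.initFaceCaptureClient. The helper below sketches that wiring; buildFaceCaptureOptions is a hypothetical name, and the callbacks are left to the integrator:

```javascript
// Hypothetical glue: turns the live-capture-video-session response into the
// options object expected by BioserverVideo.initFaceCaptureClient.
function buildFaceCaptureOptions(sessionResponse, urlWbs) {
  const sessionId = sessionResponse.sessionId;
  return {
    wspath: 'video-server/engine.io',
    bioserverVideoUrl: urlWbs,
    rtcConfigurationPath: urlWbs + '/video-server/coturnService?bioSessionId=' + encodeURIComponent(sessionId),
    bioSessionId: sessionId
    // trackingFn, errorFn, showChallengeInstruction, showChallengeResult:
    // supplied by the integrator.
  };
}
```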

Check Status of the Portrait

With this request, the client application checks the status of the submitted portrait.

Ask for Face and Liveness Detection Result

The client application can use this API to implement polling and proceed to the next steps only once the portrait's status is VERIFIED, or prompt the user to retry with another portrait capture.

Example Request

The request to check the status of the portrait is shown in the snippet:

Shell
curl -X GET \
  https://[URL_MAIN_PART]/gips/v1/identities/[IDENTITY_ID]/status/[PORTRAIT_ID] \
  -H 'apikey: [APIKEY_VALUE]'
Parameters

The parameters used are described in the table. Details about the parameters are available in the Using ID&V for Face Liveness Detection Guide.

Variable | Description
URL_MAIN_PART | The ID&V domain.
APIKEY_VALUE | Client application API key as provided by your administrator(s).
IDENTITY_ID | Value obtained after performing Step 1. This should be the id value from the Create Identity response message.
PORTRAIT_ID | Value obtained after performing Step 6. This should be taken from the id value of the Evaluate a Portrait response message.
Example Response

The response to the portrait status request is shown in the snippet:

JSON
{
  "status": "INVALID",
  "type": "PORTRAIT",
  "id": "97d8354e-7297-4eba-be39-1569d4c6342b"
}
Parameters

The parameters used are described in the table. Details about the parameters are available in the Using ID&V for Face Liveness Detection Guide.

Variable | Description
id | The portrait's ID.
type | Type of the evidence (here PORTRAIT).
status | Status of the portrait processing.

Values for status can be:

  • VERIFIED - means that the document/face has been successfully verified. When VERIFIED, a document/face is scored on a scale of 1 to 4.

    • LEVEL1: low confidence
    • LEVEL2: medium confidence
    • LEVEL3: high confidence
    • LEVEL4: very high confidence
  • INVALID - means that the document/face is considered invalid after the checks performed

  • NOT_VERIFIED - means that the document/face was processed, but not enough checks were performed to take a decision, most of the time due to bad quality of the image, or an unsupported document type

  • PROCESSING - means that the evidence is currently being processed by the service

  • ADJUDICATION - means that the evidence is currently reviewed by a human expert
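
These statuses can be mapped to the client's next action. The helper below is an illustrative sketch (nextActionForStatus is a hypothetical name), treating PROCESSING and ADJUDICATION as "keep waiting" states:

```javascript
// Illustrative mapping from portrait status to the client's next action.
function nextActionForStatus(status) {
  switch (status) {
    case 'VERIFIED':
      return 'proceed';       // continue the identity proofing flow
    case 'PROCESSING':
    case 'ADJUDICATION':
      return 'wait';          // poll again later
    case 'INVALID':
    case 'NOT_VERIFIED':
      return 'retry-capture'; // prompt the user to capture another portrait
    default:
      return 'error';         // unknown status
  }
}
```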

Get Portrait Capture

This retrieves the portrait image capture for this identity.

Example Request

The request to retrieve the portrait image capture is shown in the snippet:

Shell
curl -X GET https://[URL_MAIN_PART]/gips/v1/identities/[IDENTITY_ID]/attributes/portrait/capture \
  -H 'apikey: [APIKEY_VALUE]'

When this request is sent, the ID&V response is multipart data with binary image content.

Example Response

The response for the portrait image capture is shown in the snippet:

Script
--1b817195-cbe4-485f-90fd-4ed6f27f54a8--
Content-Disposition: form-data; name="Portrait"
Content-Type: application/octet-stream
...
...
--1b817195-cbe4-485f-90fd-4ed6f27f54a8--

To view the included image, the response must be edited as follows:

  • At the beginning of the response, delete the multipart header:
Script
--1b817195-cbe4-485f-90fd-4ed6f27f54a8--
Content-Disposition: form-data; name="Portrait"
Content-Type: application/octet-stream
  • At the end of the response, delete the multipart footer:
Script
--1b817195-cbe4-485f-90fd-4ed6f27f54a8--
  • Save the modified response and open it with an HTML image element:
HTML
<img src="..." alt="success" />
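
The manual edit above can also be done programmatically. extractPortraitPart below is a hypothetical sketch that assumes a single-part body with CRLF line endings; a real client should prefer a proper multipart parser:

```javascript
// Hypothetical sketch of the manual edit described above: drop the opening
// boundary and part headers, and drop the closing boundary, keeping the payload.
// Assumes a single part and CRLF line endings.
function extractPortraitPart(rawBody) {
  const lines = rawBody.split('\r\n');
  const boundary = lines[0];            // e.g. --1b817195-...--
  const blankIndex = lines.indexOf(''); // blank line terminates the part headers
  const closingIndex = lines.lastIndexOf(boundary);
  return lines.slice(blankIndex + 1, closingIndex).join('\r\n');
}

// In the browser, the resulting bytes could then be shown via a Blob URL, e.g.:
// img.src = URL.createObjectURL(new Blob([bytes], { type: 'image/jpeg' }));
```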

JavaScript API 

This section discusses the JavaScript API.

EnvironmentDetection 

This section discusses detecting and managing various environments.

BioserverEnvironment.detection 

This function detects if the current environment (OS/browser) is supported. If the environment is not supported, the response contains a list of supported browsers according to the current OS (parameter supportedList).

JavaScript
BioserverEnvironment.detection()

Response

The response fields are described in the table.

Field | Type | Description
envDetected | Object | Object that contains the result of the environment detection
envDetected.os | Object | Object that contains the result of the OS support check
envDetected.os.isSupported | Boolean | Boolean indicating if the OS is supported (true if supported)
envDetected.os.supportedList | String[] | The list of supported OSes, if the OS is not supported
envDetected.os.isMobile | Boolean | Boolean indicating if the OS is a mobile OS (true if mobile)
envDetected.browser | Object | Object that contains the result of the browser support check
envDetected.browser.isSupported | Boolean | Boolean indicating if the browser is supported (true if supported)
envDetected.browser.supportedList | Object[] | The list of supported browsers according to the current OS, if the browser is not supported
envDetected.browser.supportedList[i].name | String | Supported browser name
envDetected.browser.supportedList[i].minimumVersion | String | Minimum supported version of the browser
message | String | Message if the current environment is not supported

Usage Example

A detection request for BioserverEnvironment.detection to verify both the OS and browser are supported is shown in the snippet:

JavaScript
// request if current environment (OS/browser) is supported
var env = BioserverEnvironment.detection();

if (!env.envDetected) { console.log('env detection failed with error: ' + env.message); return; }

var envOS = env.envDetected.os;
if (!envOS.isSupported) { console.log('env detection error: ', env.message, 'Supported OS list', envOS.supportedList); return; }

var envBrowser = env.envDetected.browser;
if (!envBrowser.isSupported) { console.log('env detection error: ', env.message, 'Supported Browsers', envBrowser.supportedList); return; }

Example Success Response

A success response for BioserverEnvironment.detection that verifies both the OS and browser are supported is shown in the snippet:

JSON
{
  "envDetected": {
    "os": {
      "isSupported": true,
      "supportedList": [],
      "isMobile": false
    },
    "browser": {
      "isSupported": true,
      "supportedList": []
    }
  },
  "message": ""
}

Example Error Response

An error response for BioserverEnvironment.detection where the OS is supported but the browser is not is shown in the snippet:

JSON
{
  "envDetected": {
    "os": {
      "isSupported": true,
      "supportedList": []
    },
    "browser": {
      "isSupported": false,
      "supportedList": [
        { "name": "Chrome", "minimumVersion": "56" },
        { "name": "Firefox", "minimumVersion": "50" },
        { "name": "Opera", "minimumVersion": "47" },
        { "name": "Edge", "minimumVersion": "17" },
        { "name": "HuaweiBrowser", "minimumVersion": "12" }
      ]
    }
  },
  "message": "You seem to be using an unsupported browser."
}

The previous JSON response is an example of what WebBioServer could return. For the exact requirements, please consult the Requirements section.

NetworkCheck 

This section discusses how to check that the user's network connectivity is good enough to perform video functions.

connectivityMeasure

If the user's network connection does not meet latency and speed specifications, the video capture will fail. The connectivityMeasure API checks whether the user's network connection is adequate to proceed. If any of the verifications fails, the API returns an error message.

Verifications are performed in this order:

  • Latency: Verifies that the latency is within range. If so, the API proceeds to perform the next check; if not, it returns a latency failure without checking the upload speeds.

  • Upload speed: Verifies that the upload speed is fast enough. If so, it returns the results; if not, it returns an upload failure.

JavaScript
BioserverNetworkCheck.connectivityMeasure({
  uploadURL: urlBasePath + '/network-speed',
  latencyURL: urlBasePath + '/network-latency',
  onNetworkCheckUpdate: onNetworkCheckUpdate,
  errorFn: () => { console.log('Failed to check user connectivity requirements'); }
});
Parameters

The parameters used are described in the table.

Field | Type | Description
latencyURL | String | URL that will be used for the latency check.
downloadURL | String | Not used - deprecated.
uploadURL | String | URL that will be used for the upload check.
onNetworkCheckUpdate | Function | Callback function fired with the check results.
errorFn | Function | (Optional) The callback to handle errors. If the callback is not provided, onNetworkCheckUpdate will be called after the timeout.
Usage example

The onNetworkCheckUpdate request to check network connectivity results is shown in the snippet:

JavaScript
// call it once the document is loaded
window.onload = () => {
  function onNetworkCheckUpdate(networkCheckResults) {
    console.log({networkCheckResults});
    if (!networkCheckResults.goodConnectivity) {
      console.log('BAD user connectivity');
      if (networkCheckResults.upload) {
        console.log('Upload requirements not reached');
        console.log('Upload speed threshold is ' + BioserverNetworkCheck.UPLOAD_SPEED_THRESHOLD);
      } else if (networkCheckResults.latencyMs) {
        console.log('Latency requirements not reached');
        console.log('Latency threshold is ' + BioserverNetworkCheck.LATENCY_SPEED_THRESHOLD);
      } else {
        console.log('Failed to check user connectivity requirements');
      }
      // STOP user process and display error message
    }
  }
  const urlBasePath = '/demo-server';
  BioserverNetworkCheck.connectivityMeasure({
    uploadURL: urlBasePath + '/network-speed',
    latencyURL: urlBasePath + '/network-latency',
    onNetworkCheckUpdate: onNetworkCheckUpdate,
    errorFn: (e) => {
      console.error('An error occurred while calling connectivityMeasure: ', e);
    }
  });
};
Example Responses

If the network check completes successfully, the onNetworkCheckUpdate callback is called with the following parameters.

Success Response Parameters

The table shows the parameters returned if the request is successful.

Field | Type | Description
goodConnectivity | Boolean | The value is false if the connectivity requirements are not met.
latencyMs | Number | The value of the current latency in milliseconds.
upload | Number | The value of the current upload speed (Kbits/s).
Result of onNetworkCheckUpdate with good connectivity

A true response for goodConnectivity is shown in the snippet:

JSON
// onNetworkCheckUpdate will be called with the result below:
{
  "goodConnectivity": true,
  "latencyMs": 44,
  "upload": 5391
}
Result of onNetworkCheckUpdate with bad connectivity

A false response for goodConnectivity is shown in the snippet:

JSON
// onNetworkCheckUpdate will be called with the result below:
{
  "goodConnectivity": false,
  "latencyMs": 44,
  "upload": 0 // upload speed check not done
}

UIExtensions

This section describes working with the capture UI graphics (high liveness and passive video liveness).

High Liveness: resetLivenessHighGraphics

This function resets the Join the dots challenge graphics.

Example Usage With Custom Graphic Options

Graphic options for the onStartCaptureClick function are shown in the snippet:

JavaScript
BioserverVideoUI.resetLivenessHighGraphics();
JavaScript
function onStartCaptureClick() {
  // change the color of the challenge points
  // and enable the tooltip option
  const graphicOptions = {
    tooltip: {
      enabled: true,
      backgroundColor: 'DarkTurquoise',
      text: 'Move the line gently with your head to this point',
      duration: '4' // toggle the tooltip for 4 seconds, or use 0 to disable toggling
    },
    controlledPoint: { radius: 40, color: 'blue', borderSize: '3', borderColor: 'white' },
    challengePoint: {
      done: { color: 'OrangeRed' },
      target: { color: 'DarkTurquoise' }
    },
    challengeLines: {
      done: { color: 'OrangeRed', dashed: false },
      target: { color: 'DarkTurquoise' }
    }
  };
  BioserverVideoUI.resetLivenessHighGraphics(graphicOptions);
}
Parameters

The parameters used are described in the table.

Field | Type | Description
tooltip (optional) | Object | Graphic options to show tooltips near challenge points (tooltips contain user instructions).
tooltip.enabled (optional) | Boolean | Enables showing tooltips on challenge points. Default value: false.
tooltip.backgroundColor (optional) | String | Tooltip background color. Default value: #ff6700.
tooltip.width (optional) | String | Tooltip width. Default value: 200px.
tooltip.fontSize (optional) | String | Tooltip font size. Default value: 0.8em.
tooltip.fontColor (optional) | String | Tooltip text color. Default value: white.
tooltip.duration (optional) | String | Toggles the tooltip using the given duration in seconds (e.g. show it for 4s, hide it for 4s). Default value: 4.
tooltip.text (optional) | String | Tooltip text (user instructions). Default value: Move the line gently with your head to this point.
controlledPoint (optional) | Object | Graphic options for the starting point controlled by the user's face movement.
controlledPoint.radius (optional) | String | Radius of the starting point. Default value: 40.
controlledPoint.color (optional) | String | Background color of the starting point. Default value: black.
controlledPoint.borderSize (optional) | String | Border size of the starting point. Default value: 3.
controlledPoint.borderColor (optional) | String | Border color of the starting point. Default value: white.
challengePoint (optional) | Object | Challenge points graphic options.
challengePoint.done (optional) | Object | Graphics of done challenge points.
challengePoint.done.color (optional) | String | The background color of the challenge point. Default value: Lavender.
challengePoint.done.borderSize (optional) | String | Border size of the challenge point. Default value: 3.
challengePoint.done.borderColor (optional) | String | Border color of the challenge point. Default value: white.
challengePoint.done.textColor (optional) | String | Challenge number text color. Default value: white.
challengePoint.done.textFont (optional) | String | Challenge number text font. Default value: Helvetica.
challengePoint.done.dashed (optional) | String | Whether or not the challenge point border is dashed. Default value: false. Allowed values: false, number.
challengePoint.target (optional) | Object | Graphics of a targeted challenge point.
challengePoint.target.color (optional) | String | The background color of the challenge point. Default value: DarkOrchid.
challengePoint.target.borderSize (optional) | String | Border size of the challenge point. Default value: 3.
challengePoint.target.borderColor (optional) | String | Border color of the challenge point. Default value: white.
challengePoint.target.textColor (optional) | String | Challenge number text color. Default value: white.
challengePoint.target.textFont (optional) | String | Challenge number text font. Default value: Helvetica.
challengePoint.target.dashed (optional) | String | Whether or not the challenge point border is dashed. Default value: false. Allowed values: false, number.
challengeLines (optional) | Object | Challenge lines graphic options.
challengeLines.done (optional) | Object | Graphics of lines connecting done challenge points.
challengeLines.done.color (optional) | String | The color of the line. Default value: Lavender.
challengeLines.done.size (optional) | String | Size of the line. Default value: 5.
challengeLines.done.dashed (optional) | String | Whether or not the line is dashed. Default value: 10. Allowed values: false, number.
challengeLines.target (optional) | Object | Graphics of the line connecting the last done challenge point with the starting circle.
challengeLines.target.color (optional) | String | The color of the line. Default value: DarkOrchid.
challengeLines.target.size (optional) | String | Size of the line. Default value: 5.
challengeLines.target.dashed (optional) | String | Whether or not the line is dashed. Default value: 10. Allowed values: false, number.
High Liveness: updateLivenessHighGraphics

This function adds the Join the dots challenge graphics to the UI.

JavaScript
BioserverVideoUI.updateLivenessHighGraphics('videoId', trackingData);
Parameters

The parameters used are described in the table.

Field | Type | Description
videoElementId | String | The ID of the video element in which the user camera is displayed.
trackingData | Object | The tracking data received from the tracking callback function.
Usage example

The HTML before and after calling the UI library is shown in the snippet:

HTML
<!-- below, the html sample before calling the UI lib -->
<div class="wrapper">
  <video id="videoId" playsinline autoplay></video>
</div>
<!-- below, the html sample after calling the UI lib -->
<!-- BioserverVideoUI.updateLivenessHighGraphics('videoId', trackingData) -->

<div class="wrapper">
  <div id="wbs-video-wrapper" style="position: relative;">
    <video id="videoId" playsinline autoplay></video>
    <div id="wbs-graphics-wrapper">
      <div id="wbs-tooltip"></div>
      <svg id="wbs-graphics-overlay" style="...">
        <!-- (...) -->
      </svg>
    </div>
  </div>
</div>
Passive Video Liveness: initPassiveVideoGraphics

This function initializes the passive video liveness graphics.

Example Usage
JavaScript
BioserverVideoUI.initPassiveVideoGraphics('video-player', {
  oval: {
    borderSize: 8,
    borderColor: 'white',
    animatedBorderColor: '#FFA000',
  },
  backgroundColor: 'rgba(21, 51, 112, 0.8)'
})
Parameters

The parameters used are described in the table.

Field | Type | Description
videoElement | String | Identifier of the HTML video element that displays the user camera.
graphicOptions (optional) | Object | Graphic options: CSS customization.

The graphic options are:

Field | Type | Description
oval (optional) | Object | Information about the oval graphics.
oval.borderSize (optional) | Number | Border size of the oval. Default: 8.
oval.borderColor (optional) | String | CSS color of the oval border. Default: #FFFFFF.
oval.animatedBorderColor (optional) | String | CSS color of the animated oval border. Default: #FFA000.
backgroundColor (optional) | String | CSS color of the background outside the oval. Default: rgba(21, 51, 112, 0.8).
Passive Video Liveness: displayPassiveVideoAnimation

This function displays the passive video liveness graphics.

Example Usage
JavaScript
const faceCaptureOptions = {
  // ...
  trackingFn: function (trackingInfo) {
    // ...
    BioserverVideoUI.displayPassiveVideoAnimation(trackingInfo);
    // ...
  },
  // ...
}
BioserverVideo.initFaceCaptureClient(faceCaptureOptions)
Parameters

The parameters used are described in the table.

Field | Type | Description
trackingInfo | Object | The trackingInfo object as sent by the server to the trackingFn() callback.
Error

Field | Type | Description
error | Object | Error object.
error.message | String | Error message. Example: "Failed to display animation".
Passive Video Liveness: stopPassiveVideoAnimation

This function stops the passive video liveness graphics.

Example Usage
JavaScript
const faceCaptureOptions = {
  // ...
  showChallengeResult: (result) => {
    BioserverVideoUI.stopPassiveVideoAnimation();
    // ...
  },
  errorFn: (error) => {
    BioserverVideoUI.stopPassiveVideoAnimation();
    // ...
  }
}
BioserverVideo.initFaceCaptureClient(faceCaptureOptions)
Error

Field | Type | Description
error | Object | Error object
error.message | String | Error message. Example: "Failed to stop animation"
Passive video Liveness : displayPassiveVideoBestImage

This function displays the best image extracted from a passive video liveness capture.

Example Usage
JavaScript
const faceCaptureOptions = {
  // ...
  showChallengeResult: async (challengeResult) => {
    const bestImgBlob = await requestBestImageFromBackend();
    BioserverVideoUI.displayPassiveVideoBestImage(bestImgBlob, challengeResult, 'best-image-wrapper', {
      oval: {
        borderSize: 5,
        borderColor: '#41B16E'
      },
    });
    // ...
  }
  // ...
};
BioserverVideo.initFaceCaptureClient(faceCaptureOptions)
Parameters

The parameters used are described in the table.

Field | Type | Description
bestImage | Blob | Best image blob, as retrieved from the server
challengeResult | Object | Parameters passed to the showChallengeResult callback
BestImageElement | String | Identifier of the HTML element that displays the best image
graphicOptions | Object | Graphic options: CSS customization

Graphic options are :

Field | Type | Description
oval (optional) | Object | Information about the oval graphics
oval.borderSize (optional) | Number | Border size of the oval. Default: 8
oval.borderColor (optional) | String | CSS color of the oval border. Default: #FFFFFF
Error

Field | Type | Description
error | Object | Error object
error.message | String | Error message. Example: "Failed to display animation"
Passive video Liveness : resetBestImage

This function resets the displayed best image.

Example Usage
JavaScript
BioserverVideoUI.resetBestImage();
Error

Field | Type | Description
error | Object | Error object
error.message | String | Error message. Example: "Failed to reset image"
Passive video Liveness : displayBestImage

This function displays the best image extracted from a passive video liveness capture, without any additional graphics.

Example Usage
JavaScript
const faceCaptureOptions = {
  // ...
  showChallengeResult: async (challengeResult) => {
    const bestImgBlob = await requestBestImageFromBackend();
    BioserverVideoUI.displayBestImage(bestImgBlob, challengeResult, 'best-image-wrapper');
    // ...
  }
  // ...
};
BioserverVideo.initFaceCaptureClient(faceCaptureOptions)
Parameters

The parameters used are described in the table.

Field | Type | Description
bestImage | Blob | Best image blob, as retrieved from the server
challengeResult | Object | Parameters passed to the showChallengeResult callback
BestImageElement | String | Identifier of the HTML element that displays the best image

Global Error Codes 

The table shows the global error codes for Biometric Services.

Code | Description | Component | Steps to reproduce | Matching HTTP error
1000 | Token verification failed. Invalid token | Biometric Services core | Send a wrong token, or a token with missing liveness parameters (mode and number of challenges), in the HTTP header of any request | 400 (Bad Request)
1001 | Unauthorized - invalid credentials/token | Biometric Services core | Send bad credentials when starting a bio-session | 401 (Unauthorized)
1100 | Biometric Services core unavailable | Biometric Services core | Start a bio-session and a capture, then stop Biometric Services core | 503 (Service Unavailable)
1101 | Biometric Services core timeout | Biometric Services core | Set the bioserver-video proxy timeout to 3s; for example, send an image with rotation | 408 (Request Timeout)
1102 | Resource not found | Biometric Services core | Create a bio-session from video-server, restart Biometric Services core, then start a face capture on video-server | 404 (Not Found)
1200 | An error occurred while initializing the face tracker | Face detector | Start a video-server bio-session, start a capture (make sure no face is recognized), stop the license server daemon while tracking | 503 (Service Unavailable)
1201 | An error occurred while tracking the face | Face detector | Start a video-server bio-session, start a capture (make sure no face is recognized), stop the license server daemon while tracking | 500 (Internal Server Error)
1300 | Video chat timeout | Biometric Services video chat | Start a capture (make sure no face is recognized within 60s) | 408 (Request Timeout)
1301 | Video Capture TimeOut: No face detected! | Biometric Services video | Face-detector timeout without a best image | 408 (Request Timeout)
1302 | BioserverVideo (API) is not available | Biometric Services video chat | Stop bioserver-video, then start Biometric Services video chat | 503 (Service Unavailable)
1303 | Poor video quality | Biometric Services video chat | Change ENROLL_QUALITY_THRESHOLD to 10000 | 404 (Not Found)
1304 | No active video stream found | Biometric Services video | Do not allow device usage | 404 (Not Found)
2000 | Internal error | ALL | Create a technical error server-side | 500 (Internal Server Error)
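On the integrator side these codes can be grouped into user-facing categories. A minimal sketch (the grouping and the messages are illustrative, not part of the SDK):

```javascript
// Hypothetical client-side mapping of the documented error-code ranges to UI hints
function describeCaptureError(code) {
  if (code === 429) return 'Too many attempts, please retry later';
  if (code >= 1000 && code < 1100) return 'Authentication problem';
  if (code >= 1100 && code < 1200) return 'Biometric service unavailable';
  if (code >= 1200 && code < 1300) return 'Face tracking problem';
  if (code >= 1300 && code < 1400) return 'Video capture problem';
  return 'Internal error'; // 2000 and anything unrecognized
}
```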

FaceCapture 

This section discusses FaceCapture functionalities.

initMediaDevices (Deprecated)

Deprecated

getDeviceStream (Deprecated)

Deprecated

getMediaStream

This function requests access to the given camera devices and returns the associated MediaStream. This function prompts the user for permission to use the requested media.

Warning: Except for smartphone back cameras, the video streams from webcams and smartphone front cameras are mirrored/flipped. Depending on the camera used, you may have to apply the CSS style transform:scale(-1,1) on the video wrapper element in order to create a mirror effect on the video stream.

Recommendation: Do not use the optional parameters, so that the best settings are chosen automatically within your web app.

JavaScript
// HTML Code: <video id="my-video-player" autoplay></video>
const videoStream = await BioserverVideo.getMediaStream({videoId: 'my-video-player', video: {deviceId: 321}});
Example: Get Video Stream
JavaScript
// Requests the video stream from a given camera device
// HTML Code: <video id="my-video-player" autoplay></video>
const videoStream = await BioserverVideo.getMediaStream({videoId: 'my-video-player', video: {deviceId: 321}});
document.querySelector('#my-video-player').srcObject = videoStream;
Parameters
Field | Type | Description
videoId | String | Video element identifier
video.deviceId | String | Device identifier

Initialize Face Capture

This function initializes a face capture client with the given configuration. The returned client will let you start and stop the face capture on a given video stream, capture face-tracking info, manage challenges, and handle errors.

Recommendation: Do not use the optional parameters, so that the best settings are chosen automatically within your web app.

JavaScript
BioserverVideo.initFaceCaptureClient();
Example Usage
JavaScript
// Get a video stream
const videoStream = await BioserverVideo.getMediaStream({videoId: 'my-video-player'});
// Get a liveness session id from the backend
const sessionId = await initLivenessSession();
// Initialize a face capture client
const faceCaptureOptions = {
  wspath: 'video-server/engine.io',
  bioserverVideoUrl: '$URL-WBS',
  rtcConfigurationPath: '$URL-WBS/video-server/coturnService?bioSessionId=' + encodeURIComponent(sessionId),
  bioSessionId: sessionId,
  trackingFn: (trackingInfo) => {console.log("onTracking", trackingInfo)},
  errorFn: (error) => {console.log("face capture error", error)},
  showChallengeInstruction: (challengeInstruction) => {console.log("challenge instructions", challengeInstruction)},
  showChallengeResult: () => {console.log("call back the backend to retrieve the liveness result")}
};

const faceCaptureClient = await BioserverVideo.initFaceCaptureClient(faceCaptureOptions);
Parameters

Field | Type | Description
rtcConfigurationPath | String | Endpoint on Bioserver (not API-key protected) used to retrieve the TURN server credentials needed by the WebRTC functionality.
bioserverVideoUrl | String | Bioserver video URL.
wspath (optional) | String | The websocket path used to communicate with the server. Example: /video-server/engine.io.
bioSessionId | String | The bio-session ID in which the user images will be temporarily stored during the capture process.
trackingFn | Function | The callback that handles the face tracking information. It is fired on each video frame; the tracking information it receives is detailed below.
showChallengeInstruction | Function | The callback that handles challenge instructions. It is fired only if the liveness check (LIVENESS_HIGH, set by the integrator) is requested, and only when the user's face is detected.
showChallengeResult | Function | This callback is fired once the challenge is done. The results then have to be requested by the Service Provider (SP).
errorFn | Function | The callback that handles video capture errors. It is fired when an error happens during the capture process.

The tracking information passed to trackingFn contains:

  • The user's face position on this frame:
    - facex: position of the face on the x axis
    - facey: position of the face on the y axis
    - facew: width of the face
    - faceh: height of the face
  • Distance instruction: if distance is true, an instruction is given to the user to move closer to the capture device. Customizable message (e.g. "Please move closer").
  • An instruction given to the user to help get the best image. It can be one of:
    - TRACKER_POSITION_INFO_CENTER_TURN_LEFT
    - TRACKER_POSITION_INFO_MOVING_TOO_FAST
    - TRACKER_POSITION_INFO_MOVE_BACK_INTO_FRAME
    - TRACKER_POSITION_INFO_CENTER_ROTATE_DOWN
    - TRACKER_POSITION_INFO_CENTER_TURN_RIGHT
  • The LIVENESS_HIGH challenge position, if LIVENESS_HIGH was requested:
    - livenessHigh.phoneNotVertical: true if the user's phone is not positioned vertically
    - livenessHigh.stillFace: true if the user's face has not moved for a moment
    - livenessHigh.movingPhone: [only on some browsers] true if the user is moving their phone (the user is expected to keep the phone still)
    - livenessHigh.startingPoint: position of the starting point
    - livenessHigh.startingPoint.x: position of the starting point on the x axis
    - livenessHigh.startingPoint.y: position of the starting point on the y axis
    - livenessHigh.controlledPoint: position of the controlled point
    - livenessHigh.controlledPoint.x: position of the controlled point on the x axis
    - livenessHigh.controlledPoint.y: position of the controlled point on the y axis
    - livenessHigh.targetOnHover: true if the controlled point is on the target point (the user is hitting the point)
    - livenessHigh.challengeCircles: array of all destination challenge positions
    - livenessHigh.challengeCircles[i].x: position of the challenge circle on the x axis
    - livenessHigh.challengeCircles[i].y: position of the challenge circle on the y axis
    - livenessHigh.challengeCircles[i].r: radius of the challenge circle
    - livenessHigh.targetChallengeIndex: index of the target challenge circle in the array above
  • The width and height of the current frame.

Start Face Capture

This example shows an autocapture of a FACE selfie without any liveness verification.

JavaScript
const faceCaptureClient = await BioserverVideo.initFaceCaptureClient(faceCaptureOptions);
// faceCaptureClient == {
//   start : function(stream: MediaStream),
//   cancel : function()
// }

// Start the face capture (e.g. when the user clicks on the capture button)
faceCaptureClient.start(videoStream);

// Stop the face capture (e.g. when the user clicks on the stop capture button)
faceCaptureClient.cancel();

User access blocking

After too many incorrect liveness attempts, the liveness service is disabled for the user for a given period. The goal is to limit liveness spoofing attempts. In this case, the server returns status code 429.

HTTP
{
  code: 429,
  error: "Maximum captures attempt reached",
  unlockDateTime: "2021-01-14T14:30:05.643Z"
}

This response can be returned by the server on two calls from the client: initFaceCaptureClient and start. initFaceCaptureClient now creates the connection with the back end and sends user information for validation, so the call can take a bit longer than before; for proper integration, add a loading page to the UX (see our sample-app integration).

Here is an example of the client integration of FP functionality.

Sample code:

JavaScript
const faceCaptureOptions = {
  wspath: wspath,
  bioserverVideoUrl: bioserverVideoUrl,
  rtcConfigurationPath: rtcConfigurationPath,
  showChallengeInstruction: (challengeInstruction) => {
    // custom code
  },
  showChallengeResult: async () => { /* custom code */ },
  trackingFn: () => { /* custom code */ },
  errorFn: (error) => {
    if (error.code && error.code === 429) { // user is blocked
      // we reset the session when the liveness check session is finished
      resetLivenessDesign();
      document.querySelectorAll('.step').forEach((step) => step.classList.add('d-none'));

      // the lock counter is displayed to the user
      userBlockInterval(new Date(error.unlockDateTime));
      document.querySelector('#step-liveness-fp-block').classList.remove('d-none');
    }
    // custom code
  }
};
client = await BioserverVideo.initFaceCaptureClient(faceCaptureOptions);
client.start(stream);
// both of the previous calls can raise the 429 error
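The userBlockInterval helper in the snippet above is integration-side code. A minimal sketch of the remaining-lock-time computation such a helper might perform (hypothetical, not part of the SDK):

```javascript
// Compute the whole seconds remaining until the user is unblocked,
// from the unlockDateTime field of the 429 response (hypothetical helper)
function secondsUntilUnlock(unlockDateTime, now = new Date()) {
  const remainingMs = new Date(unlockDateTime).getTime() - now.getTime();
  return Math.max(0, Math.ceil(remainingMs / 1000)); // never negative
}
```

A countdown UI can call this function once per second and re-enable the capture button when it returns 0.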

trackingFn() without challenge

JSON
// Example of response containing user face tracking information
{
  "facex": 217.82150268554688,
  "facey": 175.0970458984375,
  "facew": 218.2180938720703,
  "faceh": 218.2180938720703,
  "positionInfo": "TRACKER_POSITION_INFO_MOVING_TOO_FAST",
  "distance": true, // User face is too far = display "Move closer" message
  "w": 1280,
  "h": 720,
  "timestamp": 1536335057
}
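The face coordinates above are expressed in the captured frame's coordinate system (w × h); to draw an overlay on a video element rendered at a different size they must be scaled. A sketch (hypothetical helper, not part of the SDK):

```javascript
// Scale the face box from the captured frame (trackingInfo.w x trackingInfo.h)
// to the size at which the video element is actually displayed
function scaleFaceBox(trackingInfo, displayWidth, displayHeight) {
  const sx = displayWidth / trackingInfo.w;
  const sy = displayHeight / trackingInfo.h;
  return {
    x: trackingInfo.facex * sx,
    y: trackingInfo.facey * sy,
    width: trackingInfo.facew * sx,
    height: trackingInfo.faceh * sy
  };
}
```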

trackingFn() with LIVENESS_HIGH mode

JavaScript
// Example of response containing face-tracking information when the LIVENESS_HIGH challenge is requested
{
  "faceh": 275.3572082519531,
  "facew": 275.3572082519531,
  "facex": 143.19139099121094,
  "facey": 128.05934143066406,
  "w": 1280,
  "h": 720,
  "timestamp": 1549893651,
  "distance": true, // User face is too far = display "Move closer" message
  "livenessHigh": {
    "controlledPoint": {"x": 299, "y": 236},
    "targetChallengeIndex": 2,
    "challengeCircles": {
      "0": {"x": 199, "y": 97, "r": 91},
      "1": {"x": 344, "y": 291, "r": 91},
      "2": {"x": 99, "y": 296, "r": 91},
      "3": {"x": 536, "y": 247, "r": 91}
    }
  }
}
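The targetOnHover flag can be cross-checked client-side by testing whether controlledPoint falls inside the target circle. A sketch (hypothetical helper, not part of the SDK):

```javascript
// Check whether the controlled point lies inside the target challenge circle,
// using the livenessHigh object from the tracking information
function isOnTarget(livenessHigh) {
  const target = livenessHigh.challengeCircles[livenessHigh.targetChallengeIndex];
  const dx = livenessHigh.controlledPoint.x - target.x;
  const dy = livenessHigh.controlledPoint.y - target.y;
  // inside the circle when the squared distance is within the squared radius
  return dx * dx + dy * dy <= target.r * target.r;
}
```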

showChallengeInstruction()

JavaScript
// Example of the challenge instruction to display to the user
// if LIVENESS_HIGH mode is requested:
"TRACKER_CHALLENGE_2D" // video channel and challenge are starting; the end user shall move their face
// if the challenge is finished:
"TRACKER_CHALLENGE_PENDING" // video channel is established

errorFn()

The error response is handled by the callback errorFn() if defined; otherwise it is raised as an exception, in JSON format. For example:

JSON
{
  "code": "1301",
  "error": "Video Capture TimeOut: No face detected!"
}

See the table in Global Error Codes section for more details.

REST API 

This section describes two kinds of REST APIs:

  • Biometric WebCapture Rest API
  • Biometric Services Rest API

Biometric WebCapture Rest API 

initLivenessSession

This function creates a new session with the liveness parameters of the challenge as shown in the snippet:

HTTP
POST /init-liveness-session

Headers

The table shows the header values for initLivenessSession to create a new session.

Name | Description
apikey | This header contains the APIKEY value provided to the service provider
Content-Type | application/json

Parameters

The table shows the parameters for initLivenessSession to create a new session.

Name | Type | Description
livenessMode | String | The type of liveness to be applied during a liveness challenge session. Allowed values: NO_LIVENESS, LIVENESS_HIGH, PASSIVE_LIVENESS. With PASSIVE_LIVENESS, nothing is required from the user; this is a similar experience to autocapturing a selfie. With LIVENESS_HIGH, an active liveness, the user needs to move their head with specific head rotations driven by the back end. LIVENESS_MEDIUM is deprecated.
numberOfChallenge (optional) | Number | Deprecated
securityLevel (optional) | String | The security level applied to fraud detection. The higher the level, the stricter the fraud verification. Allowed values: LOW, MEDIUM, HIGH. Recommendation: this value shall be set to HIGH for all liveness modes. Default: HIGH
imageStorageEnabled (optional) | Boolean | Deprecated
correlationId (optional) | String | Custom identifier provided by the service provider (could be the Service Provider (SP) transaction id).
callbackURL (optional) | URL | The URL used to notify the service provider that liveness check results are available.
ttlSeconds (optional) | Number | Deprecated
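A request body assembled from these parameters might look as follows (sketch; the helper function and the chosen values are illustrative, field names come from the table above):

```javascript
// Build an init-liveness-session request body (illustrative helper)
function buildInitLivenessRequest(correlationId) {
  return {
    livenessMode: 'PASSIVE_LIVENESS', // or NO_LIVENESS / LIVENESS_HIGH
    securityLevel: 'HIGH',            // recommended value for all liveness modes
    correlationId: correlationId      // optional SP transaction id
  };
}

const body = buildInitLivenessRequest('my-sp-transaction-id');
```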
Request Example

The initLivenessSession request to create a new session is shown in the snippet:

apikey: c87f4339-97ca-11c4-9bfd-7ccd673abc58 (if api key enabled)
Content-Type: application/json
Success Response 201

If the initLivenessSession request is successful, then the success 201 status code will be returned with the Location string as shown in the table.

Name | Type | Description
Location | String | Header containing the URI of the created bio-session.
Returned Location String

The returned Location string is shown in the snippet:

HTTP
HTTP/1.1 201 Created
Location: /v2/bio-sessions/0991cedc-9111-4b9d-9e4e-8d6eb4db488f
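The bio-session identifier can be extracted from the returned Location header. A sketch (hypothetical helper, not part of the SDK):

```javascript
// Extract the bio-session identifier (last path segment) from the
// Location header returned with the 201 response
function sessionIdFromLocation(location) {
  const parts = location.split('/');
  return parts[parts.length - 1];
}
```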
Error Response
Returned Error Code 4xx

Below are the 4xx status codes and descriptions that will be returned if the initLivenessSession request generates an error.

Name | Description
400 | Something is wrong with the request
401 | Authentication is required
403 | Missing permissions to create the bio-session

Biometric Services Rest API 

getLivenessChallengeResult

This API retrieves the face and liveness detection result as shown in the header snippet:

Shell
curl -X GET \
  https://[URL_MAIN_PART]/bioserver-app/v2/bio-sessions/{bioSessionId}/liveness-challenge-result/{livenessMode} \
  -H 'apikey: [APIKEY_VALUE]'

Warning: The service used in this part is located on the Biometric Services Rest API, so you must be careful about the URL that you use.

The table shows the header parameters for the getLivenessChallengeResult function.

Field | Description
URL_MAIN_PART | The domain of the Biometric Service for face coding and matching.
APIKEY_VALUE | The client application API key, as provided by portal administrator(s).
URI

The table shows the URI parameters for the getLivenessChallengeResult function.

Field | Type | Description
bioSessionId | String | The identifier of the bio-session that contains the liveness parameters.
livenessMode | String | The liveness mode set by the integrator (to avoid any fraud).
numberOfChallenge | Integer | The number of challenges for LIVENESS_HIGH (to avoid any fraud).
securityLevel | Integer | The tracker security level expected by the integrator (fraud detection).
Success Response
Success Code 200

If the getLivenessChallengeResult request is successful then the success 200 status code will be returned with the values shown in the table.

Field | Type | Description
livenessStatus | String | Status of the liveness challenge result. Allowed values: SUCCESS, FAILED, SPOOF, ERROR, TIMEOUT
diagnostic (optional) | String | Diagnostic in case of liveness failure.
bestImageId (optional) | String | The ID of the best-image stored in the session.
fakeWebcam (optional) | Boolean | Deprecated
livenessMode | String | The liveness mode used during face capture. Allowed values: NO_LIVENESS, LIVENESS_PASSIVE, LIVENESS_PASSIVE_VIDEO, LIVENESS_HIGH.
Example Success Response

The success response is shown in the snippet:

JSON
{
  "livenessStatus": "SUCCESS",
  "bestImageId": "5597f426-3863-4fa1-b4ff-76a957913f39",
  "livenessMode": "LIVENESS_HIGH",
  "numberOfChallenge": 2,
  "signature": "eyJhbGciOiJSUzI1NiJ9.ew0KogImFhMGJkNmNhL…ogIClbmRseU5hbWUoroAE_oxDF_ZtH-E"
}
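A typical integrator-side check of this result might look as follows (sketch; the acceptance policy shown is illustrative, not a product rule):

```javascript
// Decide whether a capture can be accepted from the liveness-challenge result
// (statuses from the table above; the exact policy is up to the integrator)
function isLivenessAccepted(result) {
  return result.livenessStatus === 'SUCCESS' && Boolean(result.bestImageId);
}
```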
Returned Error Code 4xx

Below are the 4xx status codes and descriptions that will be returned if the getLivenessChallengeResult request generates an error.

Name | Description
400 | Something is wrong with the request
401 | Authentication is required
403 | Forbidden
404 | Unable to find a bio-session for the given identifier

getFaceImage

This function retrieves the image that has been used to create a face resource. This is only possible if the image storage has been enabled for the bio-session as shown in the snippet:

Shell
curl -X GET \
  https://[URL_MAIN_PART]/bioserver-app/v2/bio-sessions/{bioSessionId}/faces/{faceId}/image?compression=true \
  -H 'apikey: [APIKEY_VALUE]'

Warning: The service used in this part is located on the Biometric Services Rest API. You have to be careful about the URL you use.

Header

The table shows the header values for getFaceImage used to create a face resource.

Field | Description
URL_MAIN_PART | The domain of the Biometric Service for face coding and matching.
APIKEY_VALUE | Client application API key, as provided by portal administrator(s).

URI

Field | Type | Description
bioSessionId | String | The identifier of the bio-session containing the face.
faceId | String | The identifier of the face resource for which the image needs to be retrieved.
compression (optional) | Boolean | Enables JPEG image compression. Default value: false

Response (example):

HTTP
HTTP/1.1 200 OK
Content-Type: image/jpeg
(image)

Success 200

Name | Description
200 | The image has been successfully retrieved.

Error 204

Name | Description
204 | Storage is not enabled for the bio-session.

Error 4xx

Name | Description
400 | Something is wrong with the request
401 | Authentication is required
403 | Missing permissions to retrieve the face image
404 | Unable to find a bio-session or a face for the given identifier
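The getFaceImage URL can be assembled from its URI parameters. A sketch (hypothetical helper; the base URL stands in for https://[URL_MAIN_PART]):

```javascript
// Build the getFaceImage URL from its documented URI parameters
function faceImageUrl(baseUrl, bioSessionId, faceId, compression = false) {
  return `${baseUrl}/bioserver-app/v2/bio-sessions/${encodeURIComponent(bioSessionId)}` +
         `/faces/${encodeURIComponent(faceId)}/image?compression=${compression}`;
}
```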

getMatches

The getMatches function retrieves a list of ordered matches (best scores come first) for a given face.

The reference face is compared to the captured face created in the bio-session.

The result of each comparison is called a “match”. Each match is composed of the reference face, a candidate face, a matching score, and a false acceptance rate.

Warning: The service used in this part is located on the Biometric Services Rest API. You have to be careful about the URL you use.

Shell
curl -X GET \
  https://[URL_MAIN_PART]/bioserver-app/v2/bio-sessions/{bioSessionId}/faces/{referenceFaceId}/matches \
  -H 'apikey: [APIKEY_VALUE]'
Header

The table shows the header parameters for the getMatches function.

Field | Description
URL_MAIN_PART | The domain of the Biometric Service for face coding and matching.
APIKEY_VALUE | Client application API key, as provided by portal administrator(s).
URI

The table shows the URI parameters for the getMatches function.

Field | Type | Description
bioSessionId | String | The identifier of the bio-session containing the faces.
referenceFaceId | String | The identifier of the reference face.

Success 200

Field | Type | Description
reference | Object | The reference face.
candidate | Object | A candidate face.
score | Number | The matching score.
falseAcceptanceRate | Number | The false acceptance rate (FAR): a measure of the likelihood that Biometric Services will incorrectly return a match when the faces do not actually belong to the same person. For instance, "100" means there is no chance the two faces belong to the same person; "0.000000000028650475" means there is almost no chance Biometric Services is wrong.
correlationId (optional) | String | A custom identifier coming from the caller, currently associated with the bio-session.
created | Datetime | The date on which the match was created.
expires | Datetime | The date after which the match will expire and be removed from the server.
signature (optional) | String | A digital signature (JWS) of the response. Authentication and integrity can be verified afterward using the Biometric Services public certificate.

Response (example):

JSON
[{
  "reference": {
    "id": "aa0bd6ca-1206-415b-af94-8d2c18aa9c70",
    "friendlyName": "Presidential portrait of Barack Obama",
    "digest": "39bd0d9606a772b1e7076401f32f14bdde403b9608e789e0771b90fb79b664a4",
    "mode": "F5_1_VID60",
    "imageType": "SELFIE",
    "quality": 295,
    "landmarks": {
      "eyes": {
        "x1": 1191.4584,
        "y1": 582.79565,
        "x2": 1477.8955,
        "y2": 580.3324
      }
    }
  },
  "candidate": {
    "id": "6e1741f1-3715-416a-bfc6-4fc381d228a3",
    "friendlyName": "Barack Obama's Columbia University Student ID",
    "digest": "94d1b6ff2acf368c3e0ccaebe1d8e447ed1ccd7b596dc5cac3c13a4822b256c6",
    "mode": "F5_1_VID60",
    "imageType": "ID_DOCUMENT",
    "quality": 186,
    "landmarks": {
      "eyes": {
        "x1": 141.83296,
        "y1": 217.47075,
        "x2": 241.09653,
        "y2": 216.0568
      }
    }
  },
  "score": 7771.43408203125,
  "falseAcceptanceRate": 0.000000000028650475616752694,
  "correlationId": "891a6728-1ac4-11e7-93ae-92361f002671",
  "created": "2017-05-18T12:41:09.58Z",
  "expires": "2017-05-18T12:42:00.844Z",
  "signature": "eyJhbGciOiJSUzI1NiJ9.ew0KICAicm…0NCiAgICB9DQHSQfU7Q"
}]
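Since matches are returned ordered best-first, an integrator may additionally filter them by FAR. A sketch (hypothetical helper; the threshold value is illustrative, not a product recommendation):

```javascript
// Keep only matches whose false acceptance rate is below a chosen threshold
function acceptableMatches(matches, maxFar = 0.0001) {
  return matches.filter((m) => m.falseAcceptanceRate <= maxFar);
}
```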

Error 4xx

Name | Description
400 | Something is wrong with the request
401 | Authentication is required
403 | Missing permissions to retrieve the matches
404 | Unable to find a bio-session or a face for the given identifier

Sample Face Capture 

This section describes a face capture sample.

SimpleClient - Face Capture Example 

This is an example of a simple client making a face capture using the video capture library.

Refer to the sample application for more details.

SimpleClient.html

HTML
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Simple Client</title>
  <style>#video-output{width: 400px; border: 1px solid black;}</style>
</head>
<body>
  <video id="video-output" autoplay playsinline style="transform: scaleX(-1);"></video>
  <br/>
  <button id="capture">Capture face</button>
  <button id="stop">Stop face capture</button>

  <script src="$URL-WBS/video-server/bioserver-video-api.js"></script>
  <script src="$URL-WBS/video-server/bioserver-environment-api.js"></script>
  <script src="$URL-WBS/video-server/bioserver-network-check.js"></script>
  <script src="$URL-WBS/video-server/bioserver-video-ui.js"></script>
  <script src="SimpleClient.js"></script>
</body>
</html>

SimpleClient.js

JavaScript
let client, videoStream;
async function init() {
  // get the user camera video
  videoStream = await BioserverVideo.getMediaStream({videoId: 'video-output'});
  // display the video stream
  document.querySelector('#video-output').srcObject = videoStream;
  // get a liveness session id from the backend
  const sessionId = await initLivenessSession();
  // initialize the face capture client with callbacks
  const faceCaptureOptions = {
    wspath: 'video-server/engine.io',
    bioserverVideoUrl: '$URL-WBS',
    rtcConfigurationPath: '$URL-WBS/video-server/coturnService?bioSessionId=' + encodeURIComponent(sessionId),
    bioSessionId: sessionId,
    trackingFn: (trackingInfo) => {console.log("tracking", trackingInfo)},
    errorFn: (error) => {console.log("got error", error)},
    showChallengeInstruction: (challengeInstruction) => {console.log("got challenge instruction", challengeInstruction)},
    showChallengeResult: () => {console.log("got challenge result -> call the backend to fetch the result")}
  };
  client = await BioserverVideo.initFaceCaptureClient(faceCaptureOptions);
}
document.querySelector('#capture').addEventListener('click', async () => {
  if (client) client.start(videoStream);
});
document.querySelector('#stop').addEventListener('click', async () => {
  if (client) client.cancel();
});

async function initLivenessSession() {
  console.log('init liveness session');
  return new Promise((resolve, reject) => {
    const xhttp = new window.XMLHttpRequest();
    let path = '$URL-INTEGRATOR-BACK-END/init-liveness-session/'; // please fill in your backend endpoint
    xhttp.open('GET', path, true);
    xhttp.responseType = 'json';
    xhttp.onload = function () {
      if (this.status >= 200 && this.status < 300) {
        resolve(xhttp.response);
      } else {
        console.error('initLivenessSession failed');
        reject();
      }
    };
    xhttp.onerror = function () {
      reject();
    };
    xhttp.send();
  });
}

init();

FAQ 

Where can I find sample source code showing API integration? 

A demo app is available to showcase the integration of IDEMIA Web CaptureSDK for IDEMIA Identity offering.

Github repository: https://github.com/idemia/WebCaptureSDK
Section: Face autocapture with liveness detection

How to run sample source code from GitHub? 

  1. Install npm on your machine
Shell
npm version
{ npm: '5.6.0',
  ares: '1.10.1-DEV',
  cldr: '32.0',
  http_parser: '2.8.0',
  icu: '60.1',
  modules: '57',
  nghttp2: '1.25.0',
  node: '8.11.1',
  openssl: '1.0.2o',
  tz: '2017c',
  unicode: '10.0',
  uv: '1.19.1',
  v8: '6.2.414.50',
  zlib: '1.2.11' }
  2. Download the GitHub sources

  3. Update the demo configuration: /server/config/defaults.js. You have to point to the desired platform. By default you are calling a staging platform.

Properties
// Remote server to call
BIOSERVER_CORE_URL: 'https://<host>:<port>',
BIOSERVER_VIDEO_URL: 'https://<host>:<port>',
WEB_SDK_LIVENESS_ID_DOC: 'YOUR_API_KEY',

// Callback management
DISABLE_CALLBACK: true, // Set this key to true to disable callback functionality
SERVER_PUBLIC_ADDRESS: 'https://<host>:<port>',
LIVENESS_RESULT_CALLBACK_PATH: '/<callback-service>',

You can also enable ID&V Demo integration (Not available at the moment. Coming soon)

Properties
// ID&V Demo integration
GIPS_URL: 'https://<host>:<port>/gips/rest',
GIPS_RS_API_Key: 'YOUR_API_KEY',
IDPROOFING: false, // Enable ID&V Demo integration: true or false
  4. Go to the GitHub sources root and install the dependencies (do it only once)
Shell
npm i --verbose

  5. Run the demo (to do each time you want to start the demo)

Shell
npm run start
  6. Go to https://localhost:9943/demo-server/

How to test sample source code from GitHub with an Android phone? 

  1. Run sample source code from GitHub on your local machine

  2. Setup your phone

Open a terminal, go to the installation folder and launch once:

Shell
adb devices

This will start the adb daemon once and display the status of the connected devices.

Shell
* daemon not running; starting now at tcp:5037
* daemon started successfully
List of devices attached
XXXX128PX device

If you don't see your device:

  • try to unplug/plug the USB cable
  • set the proper USB mode
  • check that the debugging option is enabled on the device
  3. Redirect the mobile port to the local machine port
Shell
adb reverse tcp:[device port] tcp:[machine port]

Example :

Shell
adb reverse tcp:9943 tcp:9943

This will forward all mobile connections on port 9943 to local machine port 9943. So if you open a browser at 'https://localhost:9943' on the device, all requests will be sent to your local server running on port 9943.

  4. Display the phone screen on the local machine by launching the command:
Shell
scrcpy

Now the device screen should be displayed on the local machine.

How to debug sample source code from GitHub with an Android phone ? 

  1. Follow procedure regarding how to test sample source code from GitHub with an Android phone.

  2. Open Chrome on your local machine and go to: chrome://inspect/#devices

Click on "inspect"

Chrome inspection

If you have an issue, check port settings and target settings

Chrome inspection 2
Chrome inspection 3
  3. Open https://localhost:9943/demo-server/ in the Chrome browser on your smartphone. On your local machine, look at the console traces (Console section). You are also able to add breakpoints in the Sources section.

How to generate a self-signed certificate ? 

Install openssl and execute:

Bash
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 3650 -subj '/CN=demo-server' -config openssl.cnf -extensions v3_req -nodes

Then import your private key and certificate into a PKCS#12 keystore file:

Bash
openssl pkcs12 -export -out demo-server.p12 -inkey key.pem -in cert.pem

Note: This configuration is for development only. In production, you must obtain your server certificate from a public trusted authority and use a domain name you own.