Android Extensions
This section covers Android UI Extensions, Getting Started with AAMVA for Android, and the NFC Reader library.
UI Extensions
The UI Extension library helps developers display a liveness challenge for the Liveness.ACTIVE
setting (the challenge is called Join the Points). It is highly customizable in order to adapt to many different applications.
Prerequisites
Skills Required
Developers need knowledge of:
- Android Studio
- Java/Kotlin
- Android
Resources Required
The library is distributed as a Maven artifact (an aar package). It is recommended to use the Capture SDK repository to handle dependency management.
Getting Started
Adding the Library to your Project
Here is the Maven artifact (from Artifactory) with a Capture SDK repository configuration.
```groovy
buildscript {
    repositories {
        maven {
            url "$repositoryUrlMI"
            credentials {
                username "$artifactoryUserMI"
                password "$artifactoryPasswordMI"
            }
        }
        ...
    }
    ...
}
```
- repositoryUrlMI: Mobile Identity Artifactory repository URL
- artifactoryUserMI: Mobile Identity Artifactory username
- artifactoryPasswordMI: Mobile Identity Artifactory password
These properties can be obtained through the portal and should be stored in your local gradle.properties file, so that the credentials are not included in source code. Property configuration:
```properties
artifactoryUserMI=artifactory_user
artifactoryPasswordMI=artifactory_credentials
repositoryUrlMI=https://mi-artifactory.otlabs.fr/artifactory/smartsdk-android-local
```
More about gradle properties can be found here.
Note: In the UI Extensions dependency declaration, X.Y.Z should be replaced with the proper version number (for example: 1.2.6).
```groovy
dependencies {
    implementation "com.idemia.smartsdk:ui-extensions:X.Y.Z@aar"
    ...
}
```
Integrating with the UI Extension
Setting up the Layout
Before the challenge can be configured, add the view that will be responsible for displaying everything.
```xml
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layout_behavior="@string/appbar_scrolling_view_behavior">

    <com.idemia.biometricsdkuiextensions.ui.scene.view.SceneView
        android:id="@+id/sceneSurface"
        app:showBorders="true"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>
</android.support.constraint.ConstraintLayout>
```
The showBorders property controls whether the borders of the area where points can be displayed are visible.
Setting up the Scene Controller
Before creating the controller for the scene, options need to be configured and passed to the controller's constructor. These are similar to the SDK's options. A DSL is provided so that setting the options in Kotlin is much more convenient.
You must use one of the settings below depending on the selected mode:
```kotlin
joinThePointsChallengeSettings { }
passiveCaptureSettings { }
fingerCaptureSettings { }
passiveVideoCaptureSettings { }
```
JoinThePointsCaptureSettings (example configuration)
```kotlin
val settings = joinThePointsChallengeSettings {
    targetCount = 4
    useInterpolation = true
    scene {
        overlay {
            showOverlay = true
            imageRes = R.drawable.ic_face_overlay
            marginVertical = R.dimen.default_face_overlay_vertical_padding
            marginHorizontal = R.dimen.default_face_overlay_vertical_padding
            text {
                text = R.string.default_overlay_text
                textSize = R.dimen.default_overlay_text_size
                textColor = Color.parseColor(Colors.text_black)
            }
        }
        capturedLineOpacity = 0.5f
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        pointer {
            type = PointerType.PULSING
            collisionWithTargetAction = PointerCollisionAction.NONE
        }
        target {
            notSelectedImageResId = R.drawable.ic_target_free
            capturedImageResId = R.drawable.ic_target_joined
            capturedImageSolidColor = Color.CYAN
            failedImageResId = R.drawable.ic_challenge_failed
            selectedImageResId = R.drawable.ic_target_connecting_light_blue
            startingImageResId = R.drawable.ic_target_connecting
            capturedTargetOpacity = 1f
            displayTextSettings = TextSettings.ALL
            pulseAnimation {
                waves = 2
            }
            progressColor = Color.WHITE
            textColor = Color.RED
            showMarkOnCurrentTarget = true
        }
        result {
            failureImageResId = R.drawable.ic_challenge_failed
            successImageResId = R.drawable.ic_challenge_success
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#101010")
            colorText = Color.parseColor("#101010")
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
    }
}
```
PassiveCaptureSettings (example configuration)
```kotlin
val settings = passiveCaptureSettings {
    scene {
        overlay {
            showOverlay = true
            imageRes = R.drawable.ic_face_overlay
            marginVertical = R.dimen.default_face_overlay_vertical_padding
            marginHorizontal = R.dimen.default_face_overlay_vertical_padding
            text {
                text = R.string.default_overlay_text
                textSize = R.dimen.default_overlay_text_size
                textColor = Color.parseColor(Colors.text_black)
            }
        }
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        feedback {
            colorText = Color.parseColor(Colors.white)
        }
        overlay {
            showOverlay = true
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor(Colors.black)
            colorText = Color.parseColor(Colors.black)
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
        verticalTilt {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.device_vertical_tilt_feedback
            enabled = true
        }
        countdown {
            countdownSeconds = 3
        }
        delay {
            isEnabled = true
            message = R.string.capture_delay_message
        }
    }
}
```
FingerCaptureSettings (example configuration)
```kotlin
val settings = fingerCaptureSettings {
    scene {
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        rectangle {
            color = Color.BLACK
            strokeWidth = 20f
            cornerRadius {
                rx = 20f
                ry = 20f
            }
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
        feedback {
            feedbackStringMapping = mapping
            show = true
        }
        distance {
            range = convertDistanceRange(handler.getCaptureDistanceRange())
            showOptimalDistanceIndicator = true
        }
        progressBar {
            labelRes = R.string.scanning
            show = true
        }
    }
}
```
PassiveVideoCaptureSettings (example configuration)
```kotlin
val settings = passiveVideoCaptureSettings {
    scene {
        preparationScene {
            backgroundColor = Color.WHITE
        }
        faceOverlay {
            progressBar {
                progressFill = Color.GREEN
            }
        }
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        feedback {
            videoBackground { }
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
        verticalTilt {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.device_vertical_tilt_feedback
            enabled = true
        }
        delay {
            isEnabled = true
            message = R.string.capture_delay_message
        }
    }
}
```
There is no need to configure every option; each one has a default value. However, targetCount (in Join The Points mode) must be equal to the target count in FaceCaptureOptions from the Capture SDK.
A description of each option can be found in the "Options" section below.
Now we are ready to create a scene controller that will manage the challenge drawing, based on the input we provide it.
```kotlin
...
val sceneController = JoinThePointsSceneController(sceneSurface, settings)
```
```kotlin
...
val sceneController = PassiveCaptureSceneController(sceneSurface, settings)
```
```kotlin
...
val sceneController = FingerCaptureSceneController(sceneSurface, settings)
```
```kotlin
...
val sceneController = PassiveVideoCaptureSceneController(sceneSurface, settings)
```
Here sceneSurface is the SceneView instance added to the layout of the Activity/Fragment that hosts the challenge.
Use Settings with Java
Using Java is also possible. To do so, use the settings classes directly instead of the Kotlin DSL:
- For Join The Points mode: JoinThePointsChallengeSettingsBuilder or JoinThePointsChallengeSettings
- For Passive mode: PassiveCaptureSettings or PassiveSettingsBuilder
- For Finger Capture: FingerCaptureSettings or FingerCaptureSettingsBuilder
- For PassiveVideo mode: PassiveVideoCaptureSettings or PassiveVideoSceneSettingsBuilder
Using Scene Controller in Capture SDK’s Callbacks
Starting the Challenge
The controller can be used once it is configured. Start the preview first, then start the scene controller; this is an asynchronous call, and once it has finished, start the capture:
- Start the scene controller asynchronously (the capture preview will be started automatically).
- Start the capture.
There are four ways to start the scene controller: two using coroutines and two using callbacks.
- Using coroutines:
```kotlin
CoroutineScope(Dispatchers.Main).launch {
    sceneController.start(captureHandler)
    captureHandler.startCapture()
}
```
```kotlin
CoroutineScope(Dispatchers.Main).launch {
    sceneController.start()
    captureHandler.startCapture()
}
```
- Using callbacks:
```java
captureHandler.startPreview(new PreviewStatusListener() {
    @Override
    public void onStarted() {
        try {
            sceneController.start(captureHandler, () -> {
                captureHandler.startCapture();
            });
        } catch (MSCException e) {
            // handle exception
        }
    }

    @Override
    public void onError(PreviewError error) {
        // Preview initialization failed and can not be started
    }
});
```
```java
captureHandler.startPreview(new PreviewStatusListener() {
    @Override
    public void onStarted() {
        try {
            sceneController.start(() -> {
                captureHandler.startCapture();
            });
        } catch (MSCException e) {
            // handle exception
        }
    }

    @Override
    public void onError(PreviewError error) {
        // Preview initialization failed and can not be started
    }
});
```
Pausing or Stopping the Challenge
Stop both the capture and the scene when the activity pauses or when it's desired to stop the challenge.
```kotlin
captureHandler.stopCapture()
sceneController.stop()
```
Closing the Challenge
Release the resources when closing the whole challenge or activity.
```kotlin
captureHandler.destroy()
sceneController.destroy()
```
Updating the Challenge Status
Updating the challenge status is required for a fully working experience: it lets the sceneController know how to redraw all of the elements in the scene. Collect the relevant data from the SDK's callbacks and push it to the controller.
```kotlin
captureHandler.setBioTrackingListener { trackingList -> sceneController.onTracking(trackingList) }

captureHandler.setBioCaptureCR2DListener(object : BioCaptureCR2DListener {
    override fun onCurrentUpdated(point: Cr2dCurrentPoint?) {
        if (point != null) {
            sceneController.update(point)
        }
    }

    override fun onTargetUpdated(target: Cr2dTargetPoint?) {
        if (target != null) {
            sceneController.update(target)
        }
    }

    override fun onTargetsConditionUpdated(targetsNumber: Int, stability: Int) {
        sceneController.update(targetsNumber, stability)
    }
})

captureHandler.setBioCaptureResultListener(
    BioCaptureResultListenerAdapter(
        { sceneController.captureSuccess { println("CAPTURE SUCCESS") } },
        { error ->
            if (error == CaptureError.CAPTURE_TIMEOUT) {
                sceneController.captureTimeout { println("CAPTURE TIMEOUT") }
            } else {
                sceneController.captureFailure { println("CAPTURE FAILURE") }
            }
        }))
```
Updating the PassiveVideo capture
To provide a good user experience beyond the standard cases (capture success or failure), all callbacks from FaceVideoPassiveListener in the Capture SDK should be handled. Handling them is straightforward: pass the values from the SDK's callbacks to the corresponding PassiveVideoCaptureSceneController methods, as in the example below.
```kotlin
class FaceVideoPassiveListenerAdapter(
    val preparationStarted: () -> Unit,
    val preparationFinished: () -> Unit,
    val overlayUpdatedCallback: (OvalOverlay) -> Unit,
    val progressUpdatedCallback: (Float) -> Unit
) : FaceVideoPassiveListener {
    override fun onPreparationFinished() {
        preparationFinished()
    }

    override fun onPreparationStarted() {
        preparationStarted()
    }

    override fun overlayUpdated(overlay: OvalOverlay) {
        overlayUpdatedCallback(overlay)
    }

    override fun progressUpdated(progress: Float) {
        progressUpdatedCallback(progress)
    }
}
```
```kotlin
captureHandler.setFaceVideoPassiveListener(
    FaceVideoPassiveListenerAdapter(
        { sceneController.showPreparingView() },
        { sceneController.hidePreparingView() },
        { sceneController.ovalOverlayUpdate(
            FaceOval(
                it.width,
                it.height,
                it.centerX,
                it.centerY
            )
        ) },
        { sceneController.updateProgress(it) }
    )
)
```
Options
Interpolation
useInterpolation is the preferred option for low-end phones, where pointer movement may otherwise look sluggish. It is disabled by default and is used only in JoinThePointsCaptureSettings.
Pointer
Configuration of the pointer in Join The Points capture mode. It contains the following parameters:

- collisionWithTargetAction describes the action taken when the pointer collides with the current target. There is no action by default. To hide the pointer after a collision, set this option to PointerCollisionAction.HIDE.
- solidColor sets the pointer color.
- imageResourceId, as the name suggests, is the resource id of the pointer image.
- type specifies the type of pointer:
  - a standard image with a pulsing animation under it (PointerType.PULSING).
  - an image rotating towards the current target (PointerType.TRACKING).
If type is set to PointerType.PULSING, an additional pulseAnimation configuration can be provided. For this animation you can set:

- waves - number of waves in the animation.
- color - color of the animation.
- minAlpha and maxAlpha - opacity range of the animation.
Pointer is used only in JoinThePointsCaptureSettings.
Whole pointer configuration with default values:
```kotlin
pointer {
    solidColor = Color.parseColor(Colors.black)
    imageResourceId = R.drawable.dot_pointer
    type = PointerType.PULSING
    collisionWithTargetAction = PointerCollisionAction.NONE
    pulseAnimation {
        waves = 2
        minAlpha = 0.4f
        maxAlpha = 0.8f
        color = Color.parseColor(Colors.pulse_wave_color)
    }
}
```
Target
Configuration of the targets in Join The Points capture mode. It contains the following parameters:

- displayTextSettings - since targets must be captured in a specific order, the points are numbered by default; there is also one extra point, the starting point. This option allows an integrator to enable or disable the default text in specific places.
- notSelectedImageResId - resource id of the image for a point that is not active yet.
- notSelectedSolidColor - color of the image for a point that is not active yet.
- selectedImageResId - resource id of the image for the active point.
- selectedImageSolidColor - color of the image for the active point.
- capturedImageResId - resource id of the image displayed inside a dot that has already been captured.
- capturedImageSolidColor - color of the image inside a fulfilled dot.
- startingImageResId - resource id of the image displayed inside the starting point.
- startingPointSolidColor - color of the image for the starting point.
- startingPointSize - size of the starting point. It takes a data class of type TargetSize, which consists of widthInPixels and heightInPixels.
- progressColor - color of the progress indicator inside the current point.
- showMarkOnCurrentTarget - if set to true, an arrow animation indicating the current point is shown.
- textColor - color of the text inside the points.
- capturedTargetOpacity - opacity of a point that has already been captured.
- pulseAnimation - configuration of the pulsing animation around the points. For this animation you can set:
  - waves - number of waves in the animation (2 by default).
  - color - color of the animation.
  - minAlpha and maxAlpha - opacity range of the animation.
Whole target configuration with default values:
```kotlin
target {
    displayTextSettings = TextSettings.ALL
    notSelectedImageResId = R.drawable.ic_target_free
    notSelectedSolidColor = Color.parseColor("#430099")
    selectedImageResId = R.drawable.ic_target_connecting_light_blue
    selectedImageSolidColor = Color.parseColor("#007dba")
    capturedImageResId = R.drawable.ic_target_joined
    capturedImageSolidColor = Color.parseColor("#430099")
    startingImageResId = R.drawable.ic_target_connecting
    startingPointSolidColor = Color.parseColor("#430099")
    startingPointSize = TargetSize(136, 136)
    progressColor = Color.parseColor("#330370")
    showMarkOnCurrentTarget = false
    textColor = Color.parseColor(Colors.white)
    capturedTargetOpacity = 1f
    pulseAnimation {
        waves = 2
        minAlpha = 0.4f
        maxAlpha = 0.8f
        color = Color.parseColor(Colors.pulse_wave_color)
    }
}
```
Target is used only in JoinThePointsCaptureSettings.
Tapping Feedback
Refers to the built-in feedback displayed on the SceneView. This feedback shows a message discouraging the user from tapping the screen.
You can set the look of the tapping feedback in the Kotlin DSL. Tapping feedback is enabled by default; to prevent it from being displayed, set enabled to false.
It is used in all modes.
```kotlin
tapping {
    colorBackgroundResId = R.color.default_tapping_feedback_background
    colorImageResId = R.color.black
    colorTextResId = R.color.black
    textResId = R.string.no_tapping_feedback
    enabled = true
}
```
Device vertical tilt Feedback
This feedback is displayed on the SceneView before face capture and remains visible until the phone is held vertically.
You can set the look of the device vertical tilt feedback in the Kotlin DSL. It is enabled by default; to disable it, set enabled to false.
It is used in JoinThePointsCaptureSettings and PassiveCaptureSettings.
```kotlin
verticalTilt {
    colorBackgroundResId = R.color.default_device_vertical_tilt_feedback_background
    colorImageResId = R.color.black
    colorTextResId = R.color.black
    textResId = R.string.device_vertical_tilt_feedback
    enabled = true
}
```
Capture delay
The UI extension can handle capture delay: it displays a message with a countdown until capture becomes available. It is enabled by default, but you can turn it off.
```kotlin
delay {
    isEnabled = true
    message = R.string.delay_message
}
```
In the message resource, include a text parameter that will be replaced by the counter. For example, this is the default text used in the UI extension, with the text parameter:
```xml
<string name="capture_delay_message">Authentication locked.\nPlease wait for:\n%1$s</string>
```
Feedbacks
You can set the look of the feedback and countdown in the Kotlin DSL for Passive mode.

- Feedbacks are turned on by default; to disable them, set showFeedback to false.
- You can provide a feedback-type-to-message mapping by setting feedbackStringMapping, which has the function type (FaceCaptureInfo) -> String. The default mapping is in English only.
Below are the default messages for FaceCaptureInfo:

| FaceCaptureInfo | String | Comment |
|---|---|---|
| INFO_COME_BACK_FIELD | Come back in the camera field | |
| INFO_CENTER_TURN_LEFT, INFO_CENTER_TURN_RIGHT, INFO_CENTER_ROTATE_DOWN, INFO_CENTER_ROTATE_UP, INFO_CENTER_TILT_LEFT, INFO_CENTER_TILT_RIGHT | Center your face in camera view | |
| INFO_CENTER_MOVE_FORWARDS | Move your face forward | |
| INFO_CENTER_MOVE_BACKWARDS | Move your face backward | |
| INFO_TOO_FAST | Moving too fast | |
| CENTER_GOOD | Face is in good position | Used only in Passive mode |
| INFO_DONT_MOVE | (no message shown) | Sent just before the capture is taken in Passive mode |
| INFO_CHALLANGE_2D | Connect the dots | Used only in Join The Points mode |
| INFO_STAND_STILL | Stand still for a moment | Used for best face image selection before starting the challenge, in all modes. It means the face position is good; do not move anymore. |
| INFO_NOT_MOVING | Move your head to connect the dots / Move your head | Sent when the user is not performing the task. The first message is used in Join The Points mode, the second in the other modes. |
| DEVICE_MOVEMENT_ROTATION | Don't move your phone | |
It is used in JoinThePointsCaptureSettings and PassiveCaptureSettings.
```kotlin
val settings = passiveCaptureSettings {
    scene {
        ...
        feedback {
            background {
                colorBackgroundResId = R.color.idemia_blue
                alphaCanal = 0.5f
            }
            showFeedback = true
            colorTextResId = R.color.white
            faceFeedbackStringMapping = { faceCaptureInfo ->
                mapToString(faceCaptureInfo) // This should return the correct String for the FaceCaptureInfo feedback
            }
        }
    }
}
```
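For reference, a custom mapping can mirror the default English messages from the table above. The helper below is only a sketch: mapToString is a hypothetical name, and it assumes FaceCaptureInfo exposes the constants listed in the table.

```kotlin
// Hypothetical helper: maps a FaceCaptureInfo value to a display string,
// mirroring a few of the default English messages documented above.
fun mapToString(info: FaceCaptureInfo): String = when (info) {
    FaceCaptureInfo.INFO_COME_BACK_FIELD -> "Come back in the camera field"
    FaceCaptureInfo.INFO_CENTER_MOVE_FORWARDS -> "Move your face forward"
    FaceCaptureInfo.INFO_CENTER_MOVE_BACKWARDS -> "Move your face backward"
    FaceCaptureInfo.INFO_STAND_STILL -> "Stand still for a moment"
    else -> "Center your face in camera view"
}
```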
Setting up the Countdown for Passive
For passive settings, set up the countdown that will be displayed at the top of the scene.

- To set the countdown time, set countdownSeconds in the countdown settings of the Kotlin DSL for Passive mode. countdownSeconds defaults to 0 (off).
- Choose the text of the countdown (the default is "Countdown..." and is available in English only).
- If the countdown is turned on, it starts automatically before capture.
```kotlin
val settings = passiveCaptureSettings {
    countdown {
        countdownSeconds = 3
        countdownText = getString(R.string.countdown)
    }
}
```
Setting up the Overlay for Passive or JoinThePoints
Turn on the face overlay for passive/active capture by adding its configuration in the Kotlin DSL for passive/active settings.

- To turn on the overlay, set showOverlay to true (the default).
- Change the overlay image by setting imageRes.
- The overlay width and height are permanently set to match_parent, but you can optionally set the vertical and horizontal margins, marginVertical and marginHorizontal (the default is 16dp for both).
- Set up the text in the middle of the overlay in the text section by changing: text (default "Center\nyour\nface"), textSize (default 24sp), and textColor (default #010101).
```kotlin
val settings = passiveCaptureSettings {
    ...
    overlay {
        showOverlay = true
        imageRes = R.drawable.ic_face_overlay
        marginVertical = R.dimen.overlay_vertical_margin
        marginHorizontal = R.dimen.overlay_horizontal_margin
        text {
            text = R.string.overlay_text
            textSize = R.dimen.overlay_text_size
            textColor = R.color.overlay_text_color
        }
    }
}
```
Setting result success and failure indicators
Success and failure indicators can be applied in JoinThePointsCaptureSettings and PassiveCaptureSettings. However, they look different in Join The Points than in the other modes:

- In Join The Points mode, the indicator is animated from the last point that was connected.
- In Passive modes, a static image is displayed for some time.
You can show success and failure indicators after face capture. To display the indicators, inform the scene controller about the capture result by invoking the following methods:

```kotlin
sceneController.captureSuccess { /* Your code on success */ }
sceneController.captureFailure { /* Your code on failure */ }
sceneController.captureTimeout { /* Your code on timeout */ }
```
Invoke them in the FaceCaptureResultListener:
```kotlin
captureHandler.setFaceCaptureResultListener(object : FaceCaptureResultListener {

    override fun onCaptureFailure(captureError: CaptureError, biometricInfo: IBiometricInfo, extraInfo: Bundle) {
        if (captureError == CaptureError.CAPTURE_TIMEOUT) {
            sceneController.captureTimeout {
                // Your code on timeout
            }
        } else {
            sceneController.captureFailure {
                // Your code on failure
            }
        }
    }

    override fun onCaptureSuccess(image: FaceImage) {
        sceneController.captureSuccess {
            // Your code on success
        }
    }
})
```
The appearance of the indicators is configurable in the result section of the Kotlin DSL.

- You can change how long the indicator is displayed by setting resultDurationInMillis (Passive only).
- You can change the success and failure images by setting successDrawableRes and failureDrawableRes.
```kotlin
val settings = passiveCaptureSettings {
    ...
    result {
        successDrawableRes = R.drawable.ic_challenge_success
        failureDrawableRes = R.drawable.ic_challenge_failed
        resultDurationInMillis = 1000
    }
}
```
Setting preparation screen for PassiveVideo mode
A preparation screen can be displayed during the preparation phase of PassiveVideo mode. It has the following properties:

- backgroundColor, as the name suggests, is the background color. Default value: "#FFFFFF".
- colorProgressFill is the progress bar color. Default value: "#430099".
- colorProgressBackground is the progress bar background color. Default value: "#33430099".
- colorTextTitle is the color of the title text on the preparation screen. Default value: "#000000".
- colorTextDescription is the color of the description text on the preparation screen. Default value: "#808080".
Setting face overlay for PassiveVideo mode
The oval face overlay makes face capture in PassiveVideo mode much easier, as it shows where the face should be placed. It has the following properties:

- backgroundColor, as the name suggests, is the background color. Default value: "#153370".
- backgroundAlpha is a value from 0.0 to 1.0 describing how transparent the background around the oval is. Default value: 0.8f.
- progressBar configures the progress indicator around the oval. It has two properties: progressFill (default "#FFA000") and progressBackground (default "#FFFFFF").
- scanningFeedback is the message shown to the user. Default value: "Scanning... Stay within the oval".
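These properties map onto the faceOverlay block from the PassiveVideoCaptureSettings example. A sketch with the documented defaults (the nesting, and whether scanningFeedback takes a plain string or a resource, are assumptions):

```kotlin
faceOverlay {
    backgroundColor = Color.parseColor("#153370")
    backgroundAlpha = 0.8f
    progressBar {
        progressFill = Color.parseColor("#FFA000")
        progressBackground = Color.parseColor("#FFFFFF")
    }
    scanningFeedback = "Scanning... Stay within the oval"
}
```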
Additional components
CaptureResultImageView
This is a custom Android view that can be embedded within the app's layout. It contains two methods:
- fun setImage(image: Bitmap) sets the image inside the oval. The image should be cropped, and ideally square, in order to display properly.
- fun setStrokeColor(@ColorInt color: Int) sets the border color and the checkmark background around the image.
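A minimal usage sketch of the two methods (the view id resultImage and the croppedFaceBitmap variable are hypothetical names, not part of the library):

```kotlin
// Assumes a CaptureResultImageView with the hypothetical id "resultImage" in the layout.
val resultView = findViewById<CaptureResultImageView>(R.id.resultImage)
resultView.setImage(croppedFaceBitmap)                 // a square, cropped Bitmap prepared by the app
resultView.setStrokeColor(Color.parseColor("#430099")) // border and checkmark background color
```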
TutorialView
This is a component that shows tutorials provided by the TutorialProvider in the NFC Reader library. It contains one method:
- fun start(animation: ByteArray, listener: TutorialListener?) sets and starts an animation in Lottie format.
TutorialListener
This listener provides the information that the animation has ended.
onAnimationComplete()
This method is called when the animation ends.
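A minimal sketch tying TutorialView and TutorialListener together (tutorialView and animationBytes are hypothetical names; the animation bytes would come from the TutorialProvider):

```kotlin
tutorialView.start(animationBytes, object : TutorialListener {
    override fun onAnimationComplete() {
        // The Lottie animation has finished; move on to the next step
    }
})
```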
Android AAMVA
The AAMVADecoder framework targets developers who need to decode AAMVA barcode data within their mobile apps.
Prerequisites
Skills Required
Developers need knowledge of:
- Android Studio
- Java/Kotlin
- Android OS 4.1 or above
- Gradle
Resources Required
The tools required are:
- Android Studio
- Gradle Wrapper, preferred v4.4
Integration Guide
Adding a Library to your Project
To add the dependency to your project, the Artifactory repository needs to be configured. Replace the user and password properties with the proper credentials.
```groovy
buildscript {
    repositories {
        maven {
            url "$repositoryUrlMI"
            credentials {
                username "$artifactoryUserMI"
                password "$artifactoryPasswordMI"
            }
        }
        ...
    }
    ...
}
```
- repositoryUrlMI: Mobile Identity Artifactory repository URL
- artifactoryUserMI: Mobile Identity Artifactory username
- artifactoryPasswordMI: Mobile Identity Artifactory password
These properties can be obtained through the portal and should be stored in your local gradle.properties file, so that the credentials are not included in source code. Property configuration:
```properties
artifactoryUserMI=artifactory_user
artifactoryPasswordMI=artifactory_credentials
repositoryUrlMI=https://mi-artifactory.otlabs.fr/artifactory/smartsdk-android-local
```
More about gradle properties can be found here.
Then add the following implementation line to the gradle dependencies.
```groovy
dependencies {
    ...
    implementation 'com.idemia.aamva:aamva-parser:1.0.13'
    ...
}
```
Using the Library
Create an instance of the AAMVADecoder object.
```java
AAMVADecoder decoder = new AAMVADecoder(context);
```
Now the decoder is ready to use. After scanning the PDF417 barcode, the value can be decoded.
```java
decoder.initWithPDF417("Scanned PDF417 value");
Document decodedDocument = decoder.getDocument();
```
Now the Document object is ready to use, with the values fetched from the PDF417 barcode.
NFC Reader
The NFC Reader library is the mobile part of the NFC Document Reading Solution. The core of the solution is the NFC Server (minimum supported version 2.2.2), which collects and processes the read data. Once the whole document's data has been read, it is available for secure download from, or push by, the NFC Server.
This library allows reading ICAO-compliant passports.
Quick integration guide
- Add dependency in your project's build.gradle file:
```groovy
implementation("com.idemia.smartsdk:smart-nfc:$smartNfcVersion")
```
- Create an Identity on the GIPS Relying Service (gips-rs) component using the v1/identities endpoint. More information can be found here.
- Create an NFC session using the MRZ lines fetched from the document and the Identity id from the previous step, via the v1/identities/{identityId}/id-documents/nfc-session endpoint. More information can be found here.
- Create the NFCReader object. This is the entry point to the whole document reading procedure.
  - The configuration parameter is the NFCConfiguration object with the customer identifier in the ID&V cloud.
  - The activity parameter is a reference to the Android AppCompatActivity where the reader will be running.
```java
NFCConfiguration configuration = new NFCConfiguration();
NFCReader reader = new NFCReader(configuration, activity);
```
- Check if the device is compatible with this feature:
```java
reader.isDeviceCompatible();
```
- If the device is not compatible, reading will always fail. Consider displaying feedback to the end user or handling it in a different way (such as hiding this feature in the app).
- If the device is compatible, the reading process can be started. For this, a session id is required; it should be obtained from the ID&V cloud.
The reading process can be started using classic listener interfaces or by subscribing to the observable object; it is up to the integrator which way is preferred.
Warning: the observable way requires subscribing to the object returned by the start method in order to start the reading procedure (it is a cold observable).
```java
reader.start(sessionId, new ResultListener() {
    @Override
    public void onSuccess(@NotNull String documentDataAccessToken) {
        //Reading finished with success
    }

    @Override
    public void onFailure(@NotNull Failure failure) {
        //Reading finished with failure
    }
}, new ProgressListener() {
    @Override
    public void onProgressUpdated(int progress) {
        //Progress update
    }
});
```
Components
Configuration
NFCConfiguration
This is the configuration class that contains information about the server URL, the customer identifier (apiKey), and which logs are available for viewing.
| Parameter | Description |
|---|---|
| serverUrl String | The URL at which the reader can reach the NFC Server's device API. |
| serverApiKey String | API key used for the authorization process. |
| sdkExperience SDKExperience | Configuration for SDKExperience. It can be null if you don't want to use the TutorialProvider. |
| logLevel LogLevel | Logging level. |
SDKExperience
This class configures SDKExperience. It is needed to get the animations provided by IDEMIA and the location of the NFC chip on the document/phone.
Parameter | Description |
---|---|
serviceUrl String | The URL of the service where the TutorialProvider can reach SDKExperience API. (It has default value) |
apiKey String | API key used for the authorization process. |
assetsUrl String | The URL of the service where the TutorialProvider can reach animations. (It has a default value.) |
LogLevel
This is the enum used to configure the behavior of logs.
Attribute | Description |
---|---|
INFO | Show info logs |
DEBUG | Show debug logs |
ERROR | Show error logs |
NONE | Do not show logs |
NFCReader
This is the main class and the entry point for everything connected with the document reading process.
All listeners for methods below will be called on the main thread.
start(sessionId: String, resultListener: ResultListener, mrz: String, phoneNFCLocation: PhoneNFCLocation)
This starts the document reading process. It requires the session id as a parameter to fetch communication scripts.
- ResultListener is the interface that allows receiving the reading result.
- mrz is the MRZ information saved on the document.
- phoneNFCLocation is the information about the phone's NFC antenna location.
start(sessionId: String, resultListener: ResultListener, progressListener: ProgressListener, mrz: String, phoneNFCLocation: PhoneNFCLocation)
This starts the document reading process. It requires the session id as a parameter to fetch communication scripts.
- ResultListener is the interface that allows receiving the reading result.
- ProgressListener is the interface that allows receiving progress feedback.
- mrz is the MRZ information saved on the document.
- phoneNFCLocation is the information about the phone's NFC antenna location.
start(sessionID: String, mrz: String, phoneNFCLocation: PhoneNFCLocation): Observable
This is a Kotlin-friendly method that returns an object to which you can subscribe in order to start the reading process and receive feedback.
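A minimal sketch of that flow, assuming an RxJava-style Observable; the callback shapes below are illustrative assumptions, since the exact emitted types depend on the SDK version:

```java
// Cold observable: reading does not start until subscribe() is called.
reader.start(sessionId, mrz, phoneNFCLocation)
        .subscribe(
                event -> { /* progress or result update */ },
                failure -> { /* reading finished with failure */ },
                () -> { /* stream completed */ }
        );
```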
cancel()
This stops the reading procedure. The work is done in the background.
isDeviceCompatible()
This checks if the device satisfies all hardware/software requirements connected with the document reading feature.
```java
if (reader.isDeviceCompatible()) {
    // start reading
} else {
    // display message that device is not compatible
}
```
getTutorialProvider()
This is a provider for getting information about the NFC location and the document type. It also provides an animation based on the NFC location.
ResultListener
This listener provides the possibility to invoke code based upon the reading result.
onSuccess(documentDataAccessToken: String)
This method is called when the document has been read successfully. The argument contains the access token for the document data.
onFailure(failure: Failure)
This method is called when the document reading fails. This method's argument represents the failure reason. More about failures here.
ProgressListener
This provides the progress of the document reading.
onProgressUpdated(progress: Int)
This is the method called when progress changes. The argument progress can be in the range 0-100.
Failure
This contains information about the document reading failure. It is built from a message and a type. The type is a more general failure cause (more than one failure might have the same type); the message contains detailed information about what happened for a given type.
Failure types
- NFC_CONNECTION_BROKEN - The NFC connection has been broken.
- CONNECTION_ISSUE - Cannot connect to the external server; no internet connection.
- INVALID_SESSION_STATE - The session is in an unexpected state. A new one needs to be created.
- SERVER_CONNECTION_BROKEN - Cannot process data with the server side; might be a compatibility issue.
- SERVER_ERROR - A server-side error occurred.
- UNSUPPORTED_DEVICE - The device does not support NFC or it is disabled.
- READING_ISSUE - A document reading issue occurred. Can be related to NFC issues and data conversion.
- REQUESTS_LIMIT_EXCEEDED - Document reading is impossible because too many requests to the server have been made or the API key request limit has been exceeded.
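For illustration, failure types like these are commonly mapped to user-facing messages before being displayed. The FailureType enum below is a local stand-in that mirrors the documented list (the real SDK delivers the type on the Failure object), and the messages are examples, not SDK output:

```java
// Sketch: mapping documented failure types to user-facing messages.
// FailureType is a local stand-in mirroring the list above, not the SDK enum.
class FailureMessages {
    enum FailureType {
        NFC_CONNECTION_BROKEN, CONNECTION_ISSUE, INVALID_SESSION_STATE,
        SERVER_CONNECTION_BROKEN, SERVER_ERROR, UNSUPPORTED_DEVICE,
        READING_ISSUE, REQUESTS_LIMIT_EXCEEDED
    }

    static String messageFor(FailureType type) {
        switch (type) {
            case NFC_CONNECTION_BROKEN:
                return "Connection to the chip was lost. Hold the document still and try again.";
            case CONNECTION_ISSUE:
                return "No internet connection. Check your network and retry.";
            case INVALID_SESSION_STATE:
                return "Session expired. A new session will be created.";
            case UNSUPPORTED_DEVICE:
                return "This device does not support NFC or NFC is disabled.";
            default:
                // Server-side and reading issues get a generic retry message.
                return "Document reading failed. Please try again.";
        }
    }
}
```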
TutorialProvider
This class allows getting information about the NFC antenna location on the phone and on the document. It also provides the document type, and an animation based on these variables.
getNFCLocation(mrz: String, listener: NFCLocationListener)
This provides the phone and document NFC antenna locations via a callback. It also provides the document type.
getNFCLocation(mrz: String)
This is a coroutines-friendly method that returns the NFCLocationResult.
getAnimation(phoneNFCLocation: PhoneNFCLocation, documentNFCLocation: DocumentNFCLocation, documentType: DocumentType, documentFeature: String? = null, listener: AnimationListener)
This provides an animation in Lottie format via a callback.
getAnimation(phoneNFCLocation: PhoneNFCLocation, documentNFCLocation: DocumentNFCLocation, documentType: DocumentType, documentFeature: String? = null)
This is a coroutines-friendly method that returns the AnimationResult.
NFCLocationListener
This listener provides information about the phone and document NFC antenna locations via a callback. It also provides the document type.
onNFCLocation(nfcLocation: NFCLocation)
This method is called when information about the NFC antenna location is received. The argument provides the NFC location and the document type.
onFailure(failure: LocationFetchFailure)
This method is called when fetching the location information fails. The argument provides the failure reason.
AnimationListener
This listener provides the animation prepared by IDEMIA.
onAnimationProvided(animation: ByteArray)
This method is called when the animation is received. The argument is an animation in Lottie format.
onFailure(failure: AnimationFetchFailure)
This method is called when fetching the animation fails. The argument provides the failure reason.
NFCLocationResult
NFCLocation
This is the class that contains information about the phone and document NFC antenna locations. It also contains the document type information.
Parameter | Description |
---|---|
phoneNFCLocation List< PhoneNFCLocation > | NFC antenna location on the phone. If the antenna location is unknown, a list of possible locations is returned. |
documentNFCLocation DocumentNFCLocation | NFC chip location on the document. If no location information is available, FRONT_COVER is returned as the default. |
documentType DocumentType | The type of document that has an MRZ. |
documentFeature String | Additional information about the document. |
LocationFetchFailure
This is the class that contains the reason why fetching information about the phone and document NFC antenna locations failed.
Parameter | Description |
---|---|
message String | Description of the failure. |
type TutorialFailure | General failure cause (more than one failure might have the same type). |
PhoneNFCLocation
This is the enum with information about the phone's NFC antenna location.
Attribute | Description |
---|---|
TOP | The NFC antenna is at the top of the phone |
MIDDLE | The NFC antenna is in the middle of the phone |
BOTTOM | The NFC antenna is at the bottom of the phone |
SWIPE | The antenna location is unknown; ask the user to move the phone over the document |
DocumentNFCLocation
This is the enum with information about the document's NFC chip location.
Attribute | Description |
---|---|
FRONT_COVER | The NFC antenna is in the cover of the passport |
INSIDE_PAGE | The NFC antenna is in the first page of the passport |
NO_NFC | The document does not have an NFC antenna |
DocumentType
This is the enum with information about the document type that has an MRZ.
Attribute | Description |
---|---|
PASSPORT | Passport |
ID | eID |
UNKNOWN | Unknown |
AnimationResult
AnimationFetchSuccess
This is the class that contains the animation in Lottie format.
Parameter | Description |
---|---|
animation ByteArray | Animation in Lottie format. |
AnimationFetchFailure
This class contains the reason for the animation fetching failure.
Parameter | Description |
---|---|
message String | Description of the failure |
code Integer | Code of the failure |
type TutorialFailure | General failure cause (more than one failure might have the same type). |
Failure types
- CONNECTION_ISSUE - Cannot connect to the external server.
- NO_INTERNET_CONNECTION - No internet connection.
- SERVER_ERROR - A server-side error occurred.
- UNSUPPORTED_DEVICE - The device does not support NFC or it is disabled.
- READING_ISSUE - An issue occurred while fetching the NFC information or the animation. Can be related to data conversion.
- REQUEST_ERROR - Fetching the NFC information and the animation is impossible because too many requests to the server have been made or the API key request limit has been exceeded.
- MRZ_ISSUE - Issue with parsing the MRZ.
- DOCUMENT_TYPE_ISSUE - Occurs only when there is no animation for the chosen DocumentType.
Warning!
There is a possibility that after scanning an NFC chip, the user will not move their device away from the chip, and the chip will be scanned once again. This can make the device display a message about the scanned chip. Some devices (e.g. Huawei and Honor devices) exit the application and open a new window with a message about the newly-read NFC tag, which can create a bad user experience or disturb the handling of NFC results in later steps.
To prevent this from happening, you can handle NFC scanning in the application even after the scan is finished. You don't have to do anything with the result, but it will prevent the application flow from being interrupted by a scanned tag.
To do this, enable reader mode in the NFCAdapter from android.nfc by calling the enableReaderMode() method.
Alternatively, you can create an NFCReader in your activity that calls this method on Lifecycle.Event.ON_RESUME and stops it on Lifecycle.Event.ON_PAUSE.
The sample application uses a single Activity, and the NFCReader is created during Activity initialization, so reader mode is enabled the entire time the app is running in the foreground.
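A minimal sketch of the enableReaderMode() approach, using only the standard android.nfc framework API (the activity name is illustrative); the no-op callback swallows any extra tag discovery so the system dispatch window never opens:

```java
import android.app.Activity;
import android.nfc.NfcAdapter;

public class ReadingActivity extends Activity {

    @Override
    protected void onResume() {
        super.onResume();
        NfcAdapter adapter = NfcAdapter.getDefaultAdapter(this);
        if (adapter != null) {
            // Keep reader mode on: a re-scanned chip is delivered to this
            // no-op callback instead of triggering the system NFC dispatch.
            adapter.enableReaderMode(this,
                    tag -> { /* intentionally ignored */ },
                    NfcAdapter.FLAG_READER_NFC_A
                            | NfcAdapter.FLAG_READER_NFC_B
                            | NfcAdapter.FLAG_READER_SKIP_NDEF_CHECK,
                    null);
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        NfcAdapter adapter = NfcAdapter.getDefaultAdapter(this);
        if (adapter != null) {
            adapter.disableReaderMode(this);
        }
    }
}
```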
Sample Application
Below you will find instructions to add and run the sample NFC application.
Note: To run the sample NFC application, you must add the LKMS and Artifactory credentials, as well as the NFC and IPV (gips-rs) API keys, to your global gradle.properties.
Step 1: Obtain the API keys and credentials from the IDEMIA Experience Portal dashboard:
- To access the NFC and IPV (gips-rs) API keys:
  1. Log in to the IDEMIA Experience Portal.
  2. Go to My Dashboard -> My Identity Proofing. The dashboard appears.
  3. Under Access, navigate to the Environments section to find the needed keys.
- To access the LKMS and Artifactory credentials:
  1. Log in to the IDEMIA Experience Portal.
  2. Go to My Dashboard -> My Identity Proofing. The dashboard appears.
  3. Under Access, navigate to the SDK artifactory and licenses section to find the needed credentials.
Note: Remember to use the default environment (EU PROD) and confirm that the serverUrl value in NFCConfiguration and the serviceUrl value in SDKExperience match the selected environment address.
Step 2: Place the NFC and IPV (gips-rs) API keys and the LKMS and Artifactory credentials into your global gradle.properties, found in your Gradle user home directory (by default: USER_HOME/.gradle).
```properties
nfcApiKey="YOUR NFC API KEY"
ipvApiKey="YOUR IPV API KEY"

artifactoryUserMI=<artifactory user>
artifactoryPasswordMI=<artifactory credentials>
repositoryUrlMI=<repository url>

lkmsProfileId="YOUR LKMS PROFILE ID"
lkmsApiKey="YOUR LKMS API KEY"
```
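The sample app reads these values through BuildConfig fields. A minimal sketch of how such a property could be wired up in the app's build.gradle (an assumption about the sample's setup, not a required step; note that the API key values above already include quotes, which is what buildConfigField expects for a String literal):

```groovy
android {
    defaultConfig {
        // nfcApiKey/ipvApiKey come from gradle.properties; their values
        // already contain the surrounding quotes a String field requires.
        buildConfigField "String", "nfcApiKey", nfcApiKey
        buildConfigField "String", "ipvApiKey", ipvApiKey
    }
}
```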
Step 3: Fetch the sample app source code as a .zip package from Artifactory.
Step 4: In the app's source, change NFCConfiguration to match your tenant configuration: NFCConfiguration(serverUrl = "TENANT_URL/nfc/", serverApiKey = BuildConfig.nfcApiKey, sdkExperience = SDKExperience(serviceUrl = "TENANT_URL/sdk-experience/", apiKey = BuildConfig.nfcApiKey)). On the production environment, the default parameters will most probably be fine.
Step 5: In the app's source, change ServerConfigurationData to match your tenant configuration: ServerConfigurationData(serverUrl = "TENANT_URL/gips/", serverApiKey = BuildConfig.ipvApiKey). On the production environment, the default parameters will most probably be fine.
Step 6: Run the app. If all steps have been applied properly, there should not be any issues.