Beginner's Guide 

SampleApp is an app provided in the source code that already includes integration of Biometric Capture and Document Capture SDKs. The SDK integration developer can copy and paste the SampleApp code and adapt it for their context (such as personalizing the splash screen), allowing them to speed up implementation and to quickly arrive at a workable app with well-designed UX.

This guide provides information on how to obtain the application code and other artifacts needed to run it on your environment. It also explains the application code functionality step-by-step.

Note: This guide includes only the most useful methods for integrating the SDKs. For more detailed information about SDK integration, please see the Android SDK Integration Guide.

Requirements 

  • Development environment: Android Studio 4.0+, Windows, Mac or Linux, JDK 8 to JDK 12.
  • Android device/emulator with Android 5.0 (API 21) or greater.
  • Knowledge of Kotlin programming language.
  • Internet connection.

Step-by-Step Guide 

  1. First of all, fetch the sample apps from Artifactory. There is a separate project per SDK feature (document, face, and finger capture). There are also dedicated apps for remote use cases.

  2. In order to build the samples, proper credentials need to be provided. The sample projects use Gradle properties for loading credentials. The recommended way to store these properties is the local gradle.properties file located in the .gradle/ directory on your machine. All values should be accessible through the Experience Portal in the Access section: My dashboards -> My Identity Proofing -> Access. Under the Environments section you can find the backend API keys, and under SDK artifactory and licenses the Artifactory and LKMS credentials are stored. The required properties are listed below.

Properties
#Artifactory
artifactoryUserMI=artifactory_user
artifactoryPasswordMI=artifactory_credentials
repositoryUrlMI=https://mi-artifactory.otlabs.fr/artifactory/smartsdk-android-local

#LKMS
lkmsProfileId="profile_id"
lkmsApiKey="lkms_api_key"

#Backend API KEYS
WbsApiKey="bio_server_api_key"
gipsApiKey="gips_api_key"

More about gradle properties can be found here.
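The snippet below is a minimal sketch, shown in Gradle Kotlin DSL purely for illustration, of how a build script could consume the Artifactory properties above when declaring the SDK repository; the sample projects' actual build files may declare this differently.

Kotlin
// Sketch only - the sample's real build scripts may differ. It shows how the credential
// properties above could be consumed when declaring the SDK repository.
repositories {
    maven {
        url = uri(project.property("repositoryUrlMI") as String)
        credentials {
            username = project.property("artifactoryUserMI") as String
            password = project.property("artifactoryPasswordMI") as String
        }
    }
}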

  3. Before using the Biometric functionality, you need to activate the license. For a detailed description of this step, please go to License Activation. Keep in mind that the license is associated with an application id. In order to run the samples properly, you need to change the sample's application id to the id registered in your LKMS profile. The application id can be changed in the build.gradle file located in the root directory of the application module; a minimal sketch is shown after the screenshots below.
Application Id

You can find the application ids associated with your LKMS profile in the Experience Portal. Go to: My dashboards -> My Identity Proofing -> Access -> SDK artifactory and licenses. The ids are listed in the Associated App(s) row, where you can also add a new one.

Associated Apps
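A minimal sketch of the relevant build.gradle entry (shown in Gradle Kotlin DSL for illustration; the placeholder id is hypothetical and must be replaced with an id registered in your LKMS profile):

Kotlin
// Sketch only - change the placeholder to the application id associated with your LKMS profile.
android {
    defaultConfig {
        applicationId = "com.example.your.registered.id" // hypothetical placeholder
    }
}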
  4. When the license is active, you can guide the user through the registration/authentication process described in the User Management section.

  5. Review the specific application by studying the FingerSampleApp, FaceSampleApp or DocumentSampleApp modules.

  6. When you run the face scanning application (menu Run → Run 'FaceSampleApp'), you will be presented with the following screen:

SampleAppIntGuide FaceScanning
  7. When you run the document scanning application (menu Run → Run 'DocumentSampleApp'), you will be presented with the following screen:
SampleAppIntGuide DocumentSampleApp
  8. When you run the finger scanning application (menu Run → Run 'FingerSampleApp'), you will be presented with the following screen:
SampleAppIntGuide finger main screen

Finger Sample Application 

This is the top-tier application which uses both the License Manager and UserManager components which depend on Biometric Capture SDK.

The Finger Sample Application provides functionality for finger scanning, and can easily be used as a template for your own solution.

The following describes each application screen and explains how it was implemented. There is an assumption that a reader has sufficient technical knowledge about Android development to understand code samples.

Main Screen 

SampleAppIntGuide finger main screen

The main screen of the application gives you the possibility to perform scans with the following capture modes:

  • Four fingers - Single capture of four fingers. BioCaptureMode.FINGERS will be used for this capture.

  • Four fingers + thumb - two captures will be performed - the first one for four fingers, the second for the thumb only.

  • Thumb - captures only one finger using BioCaptureMode.THUMB.

  • I'm an amputee - allows choosing which fingers are present before performing the capture. It uses BioCaptureMode.FINGERS. Missing fingers are set with the setAmputeeFingers(amputeeFingers: AmputeeFingers) method from IFingerCaptureOptions.

  • Latent - mode which doesn't detect fingers - it only scans the indicated area.

FingerSampleApp amputee screen

Tutorials 

Fingers and thumb modes have dedicated tutorials:

FingerSampleApp fingers tutorial
FingerSampleApp thumb tutorial

Showing these tutorials is not mandatory and they may be turned off in the settings.

Settings 

Some of the capture settings can be configured on the screen presented below. It is accessible from the main screen of the application.

FingerSampleApp settings screen

In this screen you can:

  • Decide whether liveness detection will be enabled for the capture - based on that, FingerCaptureLiveness in FingerCaptureOptions will be set to FingerLiveness.NO_LIVENESS or FingerLiveness.MEDIUM.
  • Choose which hand will be captured.
  • Change the capture timeout (when liveness is enabled and the device is not calibrated, this is in fact the duration of the capture).
  • Turn on/off showing tutorials for the capture.

Those settings are handled by FingerAppSettingsStorage. Implementation of this screen can be found in SettingsFragment and SettingsPresenter.
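As a rough illustration of the liveness setting mentioned above, the mapping could look like the sketch below; the livenessEnabled field name is an assumption, not necessarily how FingerAppSettingsStorage names it.

Kotlin
// Sketch only: maps the settings switch to the liveness level used for the capture.
// The livenessEnabled field name is an assumption.
val liveness = if (settings.livenessEnabled) FingerLiveness.MEDIUM else FingerLiveness.NO_LIVENESS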

Setting up the capture using FingerCaptureHandler 

The Sample App uses the MVP approach, which decouples decision logic from the UI view. Logic associated with the capture can be found within CapturePresenter.

CapturePresenter implements the ChallengePresenter interface, which has only three methods, each associated with the view lifecycle.

Kotlin
interface ChallengePresenter {
    fun onResume()
    fun onPause()
    fun onDestroy()
}

The implementation of those methods is presented below:

Kotlin
class CapturePresenter(
    private val captureView: CaptureView,
    private val handlerCreator: HandlerCreator,
    private val sceneView: SceneView,
    private val imageStore: ImageStore,
    private val settingsStorage: FingerAppSettingsStorage,
) : ChallengePresenter {

    private var handler: FingerCaptureHandler? = null
    private var sceneController: FingerCaptureSceneController? = null
    private var job: Job? = null
    private val fingerSettings = settingsStorage.load()

    override fun onResume() {
        handler = handlerCreator.createFingerCaptureHandler()
        handler?.let { handler ->
            val range = when (val result = handler.captureDistanceRange) {
                is CaptureDistanceRangeSuccess -> result.captureDistanceRange
                is CaptureDistanceRangeUnavailable -> null
            }
            sceneController = FingerCaptureSceneController(sceneView, settings(range))
            attachListeners()
            job = CoroutineScope(Dispatchers.Main).launch {
                sceneController?.start(handler)
                handler.startCapture()
            }
        }
    }

    override fun onPause() {
        job?.cancel()
        stopCapture()
    }

    override fun onDestroy() {
        destroyScene()
    }

    (...)

    private fun stopCapture() {
        handler?.let {
            if (it.isStarted) {
                it.stopCapture()
                sceneController?.stop()
            }
        }
        handler?.destroy()
    }

    private fun destroyScene() {
        sceneController?.destroy()
    }

    (...)

}

Constructor parameters:

  • CaptureView - the corresponding View interface. It has only one method, setResult(data: Result, code: Int), responsible for handling the results.
  • HandlerCreator - responsible for creating FingerCaptureHandler with FingerCaptureOptions configured according to the settings presented on the screen above. For more details on FingerCaptureHandler, see finger capture configuration.
  • SceneView - a View coming from the UiExtensions library, useful e.g. for drawing finger rectangles.
  • ImageStore - class responsible for storing images after a successful capture.
  • FingerAppSettingsStorage - class for storing data from the settings screen.

The onResume function is called from the onResume method of CaptureActivity; a minimal sketch of this wiring is shown below.
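This sketch only forwards the activity lifecycle to the presenter; presenter construction and the rest of the activity's setup are omitted, so it is an illustration rather than the sample's actual CaptureActivity.

Kotlin
// Sketch only: forwards the activity lifecycle to the presenter, as described above.
class CaptureActivity : AppCompatActivity() {

    private lateinit var presenter: ChallengePresenter // created during activity setup (omitted)

    override fun onResume() {
        super.onResume()
        presenter.onResume() // creates the handler and starts the capture
    }

    override fun onPause() {
        presenter.onPause() // cancels the job and stops the capture
        super.onPause()
    }

    override fun onDestroy() {
        presenter.onDestroy() // releases the scene
        super.onDestroy()
    }
}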

  1. FingerCaptureHandler is created with HandlerCreator.
  2. CaptureDistanceRange is retrieved from FingerCaptureHandler. This step is not mandatory - it allows showing an optimal-distance indicator on some calibrated devices.
  3. FingerCaptureSceneController is created. It handles data coming from FingerCaptureHandler and draws the appropriate view, which may be configured. For more details, see the section below.
  4. Listeners for fingers are set up.
Kotlin
private fun attachListeners() {
    handler?.apply {
        setFingerTrackingListener(fingerTrackingListener)
        setFingerCaptureResultListener(fingerCaptureResultListener())
        setFingerCaptureCurrentDistanceListener(fingerCaptureCurrentDistanceListener())
        setFingerCaptureFeedbackListener(fingerCaptureFeedbackListener())
    }
}

When using UiExtensions library, implementations of FingerCaptureTrackingListener, FingerCaptureCurrentDistanceListener and FingerCaptureFeedbackListener must pass data to SceneController.

Kotlin
private val fingerTrackingListener = FingerCaptureTrackingListener { trackingInfo -> sceneController?.onTracking(trackingInfo) }
Kotlin
private fun fingerCaptureCurrentDistanceListener() =
    FingerCaptureCurrentDistanceListener { currentCaptureDistance ->
        sceneController?.onCurrentCaptureDistanceChanged(
            currentCaptureDistance.value
        )
    }
Kotlin
private fun fingerCaptureFeedbackListener() =
    FingerCaptureFeedbackListener { captureInfo -> sceneController?.onNewFeedback(captureInfo) }

Sample implementation of FingerCaptureResultListener is presented below:

Kotlin
private fun fingerCaptureResultListener() = object : FingerCaptureResultListener {
    override fun onCaptureFailure(
        captureError: CaptureError,
        biometricInfo: IBiometricInfo,
        extraInfo: Bundle,
    ) {
        onFailure(CaptureFailure(captureError))
    }

    override fun onCaptureSuccess(
        images: MutableList<MorphoImage>,
        captureResult: FingerCaptureResult,
    ) {
        onSuccess(images)
    }
}

private fun onFailure(captureFailure: CaptureFailure) {
    sceneController?.captureFailure { }
    val failureType = captureErrorToFailureType(captureFailure.captureError)
    val result = Result.Failure(
        failureType,
        captureFailure.captureError
    )
    captureView.setResult(result, Activity.RESULT_OK)
}

private fun onSuccess(images: List<MorphoImage>) {
    sceneController?.captureSuccess { }
    stopCapture()
    destroyScene()
    val imageFilesNames: ArrayList<String> = arrayListOf()
    images.dropLast(1).forEach {
        imageStore.saveImage(it.biometricLocation.name, it.toJPEG())
        imageFilesNames.add(it.biometricLocation.name)
    }

    val result = Result.Success(imageFilesNames, saveWSQ(images))
    when (fingerSettings.captureFingerSettings.captureFinger) {
        CaptureFinger.FOUR_FINGER_THUMB -> {
            captureView.setResult(result, RESULT_CAPTURE_FINGER_THUMB_TUTORIAL)
        }

        CaptureFinger.THUMB_AFTER_FOUR_FINGER -> {
            // merge with the results of the four-finger capture performed earlier
            val resultFinger = settingsStorage.loadResultSuccess()
            resultFinger?.let {
                result.imageFileWSQNames.addAll(it.imageFileWSQNames)
                result.imageFilesNames.addAll(it.imageFilesNames)
            }
            captureView.setResult(result, Activity.RESULT_OK)
        }

        else -> {
            captureView.setResult(result, Activity.RESULT_OK)
        }
    }
}

(...)

private fun saveWSQ(images: List<MorphoImage>): ArrayList<String> {
    val imageFilesWSQNames: ArrayList<String> = arrayListOf()
    val wsqImagesWithoutHand =
        images.dropLast(1).map { ImageUtils.toWSQ(it, 15.0f, 0.toByte(), 255.toByte()) }
    for (i in wsqImagesWithoutHand.indices) {
        val name = "wsq$i.wsq"
        imageStore.saveImage(name, wsqImagesWithoutHand[i].buffer)
        imageFilesWSQNames.add(name)
    }
    return imageFilesWSQNames
}
  5. A Job for handling the capture and the scene is created. It will be cancelled when the app is put into the background.

Setting up the capture with FingerCaptureView 

Finger capture can also be set up with the FingerCaptureView component. Currently this is only possible for Latent capture. In this case, the MVP approach is also used.

Kotlin
interface Latent {
    interface View {
        fun setCapture(useCase: LatentUseCase, uiSettings: UISettings) // set up FingerCaptureView with given parameters
    }

    interface Presenter {
        fun onResume() // called from Fragment onResume()
        fun switchTorch() // to turn torch on/off
    }
}

Implementation of those interfaces is present in LatentCapturePresenter and LatentCaptureFragment.

Kotlin
class LatentCapturePresenter(
    private val view: Latent.View,
    private val prepareImageStore: UserImageStore,
    private val navigation: LatentNavigation
) : Latent.Presenter {

    companion object {
        private const val TIMEOUT = 10
        private const val imageKey = "LATENT"
    }

    private var torchController: TorchController? = null
    (...)
}
  • Latent.View - implementation of the interface mentioned above.
  • UserImageStore - class which implements ImageStore. It holds the fingerprint image.
  • LatentNavigation - handles navigation to the success/error screens for Latent capture.
  • TorchController - responsible for turning the torch on/off. This interface is implemented within CaptureSDK and an instance can be retrieved via TorchListener.
Kotlin
(...)
private val torchListener = TorchListener {
    torchController = it
}

Capture is set up in onResume method of Presenter:

Kotlin
(...)
private val latentResult = object : LatentResultListener {
    override fun onError(error: Error) {
        runCatching {
            navigation.goToError()
        }
    }

    override fun onSuccess(result: LatentSuccess) {
        prepareImageStore.saveImage(imageKey, result.image.data) // saves the image in order to show it on the result screen
        navigation.goToSuccess(imageKey)
    }
}

(...)

override fun onResume() {
    val latentUseCase =
        LatentUseCase(
            LatentCaptureSettings(TIMEOUT),
            LatentCaptureListeners(result = latentResult, torchListener = torchListener)
        )
    val uiSettings = UISettings(fingerCaptureSettings {
        scene {
            progressBar {
                show = true
            }
            distance {
                showOptimalDistanceIndicator = true
            }
        }
    })
    view.setCapture(latentUseCase, uiSettings)
}
  1. First of all, a LatentUseCase is created. It takes LatentCaptureSettings (currently those settings contain only the timeout) and listeners as parameters.
  2. UISettings is created. This class wraps FingerCaptureSettings from the UiExtensions library. For more details, see preview configuration.
  3. setCapture is called on the view and the capture starts.

Torch switching is pretty simple and it requires only calling setTorch on TorchController:

Kotlin
override fun switchTorch() {
    torchController?.let {
        it.setTorch(getNegativeTorch(it.getTorch()))
    }
}

private fun getNegativeTorch(torch: Torch) = when (torch) {
    Torch.ON -> Torch.OFF
    Torch.OFF -> Torch.ON
}

Capture Layouts 

Layout for finger/thumb capture is defined in the file activity_capture.xml.

XML
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <androidx.appcompat.widget.Toolbar
        android:id="@+id/toolbar"
        style="@style/Toolbar"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <!--Custom component provided by UI-EXTENSIONS library. Is responsible for capturing biometric data -->
    <com.idemia.biometricsdkuiextensions.ui.scene.view.SceneView
        android:id="@+id/sceneSurface"
        android:layout_width="0dp"
        android:layout_height="0dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toBottomOf="@id/toolbar"
        app:showBorders="false" />

</androidx.constraintlayout.widget.ConstraintLayout>

Layout for latent capture is defined in fragment_latent_capture.xml

XML
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!--Custom component for configuring capture -->
    <com.idemia.capture.finger.api.FingerCaptureView
        android:id="@+id/fingerCaptureView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <!--Button responsible for turning torch on/off. It is handled in LatentCaptureFragment -->
    <Button
        android:id="@+id/torchController"
        style="@style/Idemia.Button.Violet"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/torch_controller"
        android:layout_marginBottom="@dimen/margin_16"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

Preview configuration 

The UiExtensions library lets you change some parameters of the view that will be displayed after starting the capture. This is done by passing FingerCaptureSettings to the FingerCaptureSceneController constructor, or to the setUp method of FingerCaptureView when using the new approach. An example of the settings:

Kotlin
private fun settings(range: CaptureDistanceRange?) = fingerCaptureSettings {
    scene {
        background {
            colorEnd = Color.parseColor(Colors.idemia_blue)
            colorStart = Color.parseColor(Colors.idemia_blue)
        }
        rectangle {
            color = Color.parseColor(Colors.white)
            strokeWidth = 5f
            cornerRadius {
                rx = 90f
                ry = 90f
            }
        }
        feedback {
            showFeedback = true
            if (thumbScanning()) {
                feedbackStringMapping = ::getThumbMapping
            }
        }
        distance {
            distanceRange = range?.let {
                DistanceRange(it.rangeMin, it.optimalMin, it.optimalMax, it.rangeMax)
            }
            showOptimalDistanceIndicator = range != null
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
    }
}

private fun thumbScanning() =
    settingsStorage.load().captureFingerSettings.captureFinger == CaptureFinger.THUMB

private fun getThumbMapping(captureInfo: FingerCaptureInfo): String {
    return when (captureInfo) {
        FingerCaptureInfo.TOO_CLOSE -> "Move your thumb further"
        FingerCaptureInfo.TOO_FAR -> "Move your thumb closer"
        FingerCaptureInfo.OPTIMAL -> ""
    }
}

A more detailed explanation of the possibilities these settings provide is available in Integrating with the UI Extension section.

Summary screen 

The screen below is displayed after a successful acquisition of fingers/thumb or when a Latent capture is finished.

SampleAppIntGuide finger summary screen

It allows sharing the captured images in .jpg and .wsq formats. The logic responsible for this can be found in DataShare.kt (see Sharing images for more details). The classes related to this screen are SuccessFragment and SuccessPresenter.

Splash Screen 

SampleAppIntGuide splash screen

This screen is displayed when the application is loaded and it can easily be customized with brand colors.

All graphical settings can be found in the layout file: fragment_splash.xml.

Note: It's also a convenient place to put any initial checks and settings like: requesting permissions or activating a license.

The sample app presenter logic behind this screen takes care of requesting a license. If this operation is successful, then it forwards the end-user to the main screen.

Splash Screen Presenter 

The Sample App uses the MVP approach which simplifies code testing by decoupling decision logic from the UI view.

For example, when the onViewCreated method in a fragment is called by the framework, then the fragment will inform the presenter that it is started, which triggers the license activation procedure which is described in License Activation.

Kotlin
//fragment
private val presenter: Splash.Presenter by lazy {
    FingerSplashPresenter(
        this,
        SplashNavigation(findNavController()) { exitApp() },
        LicenseServiceProvider(requireContext()).makeLicenceManager(),
    )
}

override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
    presenter.onCreate()
}

//presenter
override fun onCreate() {
    fetchLicence()
}

private fun fetchLicence() {
    if (fetchingLicenseAlreadyStarted()) {
        return
    }
    licenseJob = ioScope.launch {
        val licencePreparationResult = licenseService.prepareLicence()
        withContext(Dispatchers.Main) {
            when (licencePreparationResult) {
                is LicenseActivationSuccess -> navigation.goToMenu()
                is LicenseActivationError -> handleError(licencePreparationResult)
            }
        }
    }
}

private fun fetchingLicenseAlreadyStarted() = licenseJob?.isActive == true

Camera Permission Screen 

If you didn't grant the necessary permissions in the system settings, or the permissions were granted and later revoked, you will see the following screen before starting the capture:

SampleAppIntGuide finger permission screen

Logic behind this screen is presented below:

Kotlin
class PermissionsFragment : Fragment() {

    private val presenter: Permissions.Presenter by lazy {
        val args: PermissionsFragmentArgs by navArgs()
        PermissionsPresenter(
            AppPermissionProvider(this, FingerPermissionInfo()),
            PermissionsNavigator(findNavController()),
            args
        )
    }

    private var _binding: FragmentPermissionBinding? = null

    private val binding get() = _binding!!

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View {
        _binding = FragmentPermissionBinding.inflate(layoutInflater, container, false)
        return binding.root
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        binding.acceptButton.setOnClickListener { presenter.askForPermission() }
    }

    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<out String>,
        grantResults: IntArray
    ) {
        presenter.onRequestPermissionsResult(requestCode, grantResults)
    }

    override fun onDestroyView() {
        super.onDestroyView()
        _binding = null
    }
}
Kotlin
class PermissionsPresenter(
    private val permission: PermissionProvider,
    private val permissionsNavigator: PermissionsNavigator,
    private val args: PermissionsFragmentArgs
) : Permissions.Presenter {

    override fun askForPermission() {
        permission.requestPermissions()
    }

    override fun onRequestPermissionsResult(requestCode: Int, grantResults: IntArray) {
        if (requestCode == AppPermissionProvider.PERMISSION_REQUEST_CODE && grantResults.isNotEmpty()) {
            if (permission.permissionsGranted()) {
                onPermissionGranted()
            } else {
                permissionsNavigator.goToPermissionDeclined()
            }
        }
    }

    private fun onPermissionGranted() {
        if (isLatent()) {
            permissionsNavigator.goToLatent()
        } else {
            permissionsNavigator.goToScan()
        }
    }

    private fun isLatent() = args.isLatent
}

After tapping the "I accept" button, the askForPermission() method of the Presenter will be invoked and you will see a pop-up requesting the necessary permissions. Then onRequestPermissionsResult is called within the fragment and the request result is passed to the presenter method with the same name. If you grant the permissions, the application will move forward and the capture will be started. Otherwise, the following screen will be shown:

SampleAppIntGuide finger permission revoked screen

Here you can only navigate to system settings of the application to grant permissions.

Requesting Permissions 

To request permission, a component com.idemia.smartsdk.sample.common.permission.AppPermissionProvider from the Commons module is used.

If the Android system version is below M then all permissions will be required statically according to the declaration in the manifest file.

XML
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>

From M-version and above, Android supports dynamic permissions requests.

Kotlin
fragment.requestPermissions(PERMISSIONS_TO_REQUEST, PERMISSION_REQUEST_CODE)

AppPermissionProvider has three dependencies but all three have default implementations provided by a companion object. The purpose of this construction is to make testing of this possible by injecting mock dependencies during tests.

Kotlin
class AppPermissionProvider(
    @RequiresApi(Build.VERSION_CODES.M) private val checkPermission: PermissionChecker,
    private val triggerPermissionRequest: PermissionRequester,
    private val versionInfoProvider: VersionInfoProvider = StaticVersionInfo()
) : PermissionProvider {

//In companion object
operator fun invoke(f: Fragment): AppPermissionProvider =
    AppPermissionProvider(
        @RequiresApi(Build.VERSION_CODES.M) {
            (f.activity as Activity).checkSelfPermission( //permission checker
                it
            )
        },
        { f.requestPermissions(PERMISSIONS_TO_REQUEST, PERMISSION_REQUEST_CODE) } //permission requester
    )
}

When permissions are provided, the license activation is launched.

Sharing images 

It is possible to share images after successful acquisition. To accomplish this, the DataShare interface is used.

Kotlin
interface DataShare {
    fun shareMultiple(files: List<Uri>, contentType: String)
    fun shareMultipleFiles(files: List<String>, contentType: String, authority: String)
    fun shareMultipleImages(images: List<Bitmap>)
}

Currently it has only one implementation:

Kotlin
class AndroidDataShare(val activity: Activity) : DataShare {

    override fun shareMultiple(uris: List<Uri>, contentType: String) {
        val filesToShare = ArrayList<Uri>().apply { addAll(uris) }

        val intent = Intent().apply {
            action = Intent.ACTION_SEND_MULTIPLE
            type = contentType
            flags = Intent.FLAG_GRANT_READ_URI_PERMISSION or Intent.FLAG_GRANT_WRITE_URI_PERMISSION
            putParcelableArrayListExtra(Intent.EXTRA_STREAM, filesToShare)
        }

        activity.startActivity(Intent.createChooser(intent, "Share SampleApp data"))
    }

    override fun shareMultipleFiles(files: List<String>, contentType: String, authority: String) {
        val uris = files.map { FileProvider.getUriForFile(activity, authority, File(activity.filesDir, it)) }
        shareMultiple(uris, contentType)
    }

    override fun shareMultipleImages(images: List<Bitmap>) {
        val mediaSaver: MediaSaver = getMediaSaver(activity.contentResolver)
        val uris = mediaSaver.saveBitmaps(images)
        shareMultiple(uris, ContentType.JPG.value)
    }
}

Both shareMultipleFiles and shareMultipleImages create a list of Uri objects and pass it to shareMultiple. Due to changes in media saving across different versions of the Android system, the bitmap-saving logic in shareMultipleImages is hidden behind the MediaSaver interface:

Kotlin
abstract class MediaSaver {

    companion object {
        private const val IMAGE_FILENAME = "Image"
        private const val JPG_SUFFIX = ".jpg"
    }

    abstract fun saveBitmaps(images: List<Bitmap>): List<Uri>

    protected fun getImageFileName() = "${IMAGE_FILENAME}_${System.currentTimeMillis()}$JPG_SUFFIX"
}

This interface has two implementations - one for Android versions below 10 (MediaSaverBelowAndroidQ) and the second for higher OS versions (MediaSaverAndroidQAndAbove).
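For example, the summary screen could share the previously saved files roughly as sketched below; the file-name variables, the fragment context and the FileProvider authority string are assumptions used only for illustration and must match the provider declared in the app's manifest.

Kotlin
// Sketch only: shares the .jpg and .wsq files saved by ImageStore after a successful capture.
val dataShare: DataShare = AndroidDataShare(requireActivity())
val fileNames = imageFilesNames + imageFileWSQNames // names saved by ImageStore (assumed in scope)
dataShare.shareMultipleFiles(
    files = fileNames,
    contentType = "*/*", // mixed .jpg and .wsq content
    authority = "com.example.sampleapp.fileprovider" // hypothetical authority
)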

Document Sample Application 

This is the top-tier application which uses both the License Manager and UserManager components which depend on Biometric Capture SDK.

The Document Sample Application provides functionality for document scanning, and can easily be used as a template for your own solution.

The following describes each application screen and how it was implemented. There is an assumption that a reader has sufficient technical knowledge about Android development to understand code samples.


Document Sample App Settings 

SampleAppIntGuide Doc sample app settings

To access the application's settings, click on the round button with the wrench icon located in the bottom right corner of the screen.

This button is accessible on each screen except the scanning page.

Settings options

SampleAppIntGuide Doc settings

There are four options that an end-user can change:

  • Document overlay - decides whether to display an overlay with the boundaries of the document to be scanned

  • Static overlay - decides whether a static colored overlay is displayed within the document overlay

  • Time in seconds before the capture timeout - sets how long, in seconds, the capture will run before a timeout error

  • Skip tutorial - decides whether a tutorial will be shown before each scanning operation

Document Sample App Splash Screen 

Splash Screen

SampleAppIntGuide splash screen

This screen is displayed when the application is loaded. It can easily be customized with brand colors.

All graphical settings can be found in the layout file: fragment_splash.xml.

Note: This is also a convenient place to put any initial checks and settings like requesting permissions or activating a license.

The sample app logic behind this screen takes care of requesting the license and requesting the proper permissions. If those operations are successful then it forwards the end-user to the tutorial page.

The flow is illustrated below:

SampleAppIntGuide sample app logic

SampleApp uses the MVP approach which simplifies code testing by decoupling decision logic from the UI view.

In the splash screen fragment, the presenter is responsible for requesting permissions and fetching a license. It is triggered from the lifecycle methods of the SplashFragment.

Kotlin
//Fragment
override fun onStart() { //triggered by Android System
    super.onStart()
    scopeIO.launch {
        presenter.onStart()
    }
}

//Presenter
suspend fun onStart() { //triggered by Fragment
    if (permission.permissionsGranted()) {
        fetchLicence()
    } else {
        permission.requestPermissions()
    }
}

For now, ignore the scopeIO.launch instruction. It is part of the official Kotlin Coroutines framework, which was designed to simplify the execution of concurrent operations.

You can learn more about it on the official support page.
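As a generic illustration (not sample-app code; doNetworkCall and showResult are hypothetical), a coroutine typically runs the slow work on the IO dispatcher and then switches back to the main thread:

Kotlin
// Generic coroutine sketch: heavy work on the IO dispatcher, UI update on the main thread.
CoroutineScope(Dispatchers.IO).launch {
    val result = doNetworkCall()      // hypothetical long-running call
    withContext(Dispatchers.Main) {
        showResult(result)            // hypothetical UI update
    }
}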

When the onStart method is called, and proper permissions are granted, the license activation is started. The license fetching procedure is described in License Activation.

Requesting Permissions 

To request permissions, a component com.idemia.smartsdk.sample.common.permission.DocumentAppPermissionProvider is used. If the end-user's Android system is below M-version then all permissions will be required statically, according to the declaration in the manifest file.

XML
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>

From M-version and above, Android supports dynamic permissions requests:

Kotlin
fragment.requestPermissions(PERMISSIONS_TO_REQUEST, PERMISSION_REQUEST_CODE)

DocumentAppPermissionProvider expects two functions during initialization: one that is called when the permission request has to be triggered, and a second one for checking whether the proper permissions were granted.

You can check usage of this component in SplashPresenter.

First, the presenter checks whether the proper permissions are granted; if not, the request procedure is triggered.

Kotlin
suspend fun onStart() {
    if (permission.permissionsGranted()) {
        fetchLicence()
    } else {
        permission.requestPermissions()
    }
}

Then the operation is repeated; if the permissions are still not granted, the application cannot proceed and exits.

Kotlin
suspend fun onRequestPermissions() {
    if (permission.permissionsGranted()) {
        fetchLicence()
    } else {
        view.exitApp()
    }
}

The construction of the two mentioned functions can be found in the companion object of the class.

Kotlin
companion object {
    private const val PERMISSION_REQUEST_CODE = 11
    private val PERMISSIONS_TO_REQUEST: Array<String> = arrayOf(Manifest.permission.CAMERA)

    operator fun invoke(f: Fragment): DocumentAppPermissionProvider = DocumentAppPermissionProvider(
        @RequiresApi(Build.VERSION_CODES.M) { (f.activity as Activity).checkSelfPermission(it) },
        { f.requestPermissions(PERMISSIONS_TO_REQUEST, PERMISSION_REQUEST_CODE) }
    )
}

Some samples are also in the DocumentAppPermissionProviderTest unit test.

Document Sample App Tutorial 

Tutorial

These are a collection of screens that present the tutorial for scanning specific document types.

Tutorial page for an ID

SampleAppIntGuide Tutorial scanID

Tutorial page for a passport

SampleAppIntGuide Tutorial scanPassport

Tutorial page for a driver's license

SampleAppIntGuide Tutorial scanDL

These are simple animations that show the end-user how to properly scan a chosen document or barcode.

The application's architecture is MVP with a single activity where each view is presented by a separate fragment.

This approach simplifies inter-screen navigation which is composed with Navigation Component.

The core code for navigation is placed under the navigation package.

A representative navigation graph can be found in the file navigation_graph.xml while in design mode.

SampleAppIntGuide navigation graph

Each fragment that has buttons that initiate navigation is handled in the same way. Fragments have internal classes that map button IDs to the proper navigation action, as sketched below.
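An illustrative sketch of such a mapping follows; the class name, button ids and action ids are hypothetical, not the sample's actual identifiers.

Kotlin
// Sketch only: maps a clicked button id to a Navigation Component action id.
private inner class MenuNavigation {
    fun actionFor(buttonId: Int): Int = when (buttonId) {
        R.id.scanPassportButton -> R.id.action_menu_to_passportTutorial
        R.id.scanIdButton -> R.id.action_menu_to_idTutorial
        else -> R.id.action_menu_to_driverLicenceTutorial
    }
}

// usage from a click listener in the fragment:
// findNavController().navigate(menuNavigation.actionFor(view.id))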

Tutorial Configuration

There are two cases when a tutorial is not displayed and an end-user is navigated directly to the scanner page.

  • In the first case, the tutorial page is skipped when the given document type was not provided with a tutorial animation file.

    Files with tutorial animations are stored in resources under the raw package.

  • An end-user may also manually disable a tutorial in the settings page. Whole tutorial content (i.e., title or animation file) is stored inside the TutorialContent class.

As shown in the code below, the first case is checked on the menu page just after the end-user has chosen one of the document options:

Code Block 1 TutorialNavigationFlow.kt

Kotlin
// Returns true if given document has tutorial animation file
private fun hasTutorialAnimation(documentType: DocumentType): Boolean = getTutorialAnimationFileId(documentType) != TutorialContentProvider.DOCUMENT_OPTION_WITHOUT_TUTORIAL

// Each document has given animation file id as an argument with default value equal 0
private fun getTutorialAnimationFileId(documentType: DocumentType): Int = TutorialContentProvider.tutorialContent(documentType).animationFileId

Face Sample Application 

This is the top-tier application which uses both the License Manager and UserManager components, which depend on Biometric Capture SDK.

Face Sample Application can easily be used as a template for an integrator's own biometric application.

The following describes each application screen and explains how it was implemented. It is assumed that the reader has sufficient technical knowledge about Android development to understand code samples.

Face Sample App Liveness Challenge 

This functionality can be entered directly from the splash screen or after watching the tutorial.

SampleAppIntGuide liveness challenge

CR2D 

This challenge mode requires connecting multiple dots on the screen.

SampleAppIntGuide liveness dots

The logic responsible for this challenge mode can be found in the class: CR2DChallengeFragment which delegates operations to CR2DChallengePresenter.


Layout 

Layout is defined in the file fragment_challenge_cr2d.xml.

XML
<!--Custom component provided by UI-EXTENSIONS library. Is responsible for capturing biometric data -->
    <com.idemia.biometricsdkuiextensions.ui.scene.scene.SceneView
        ...
        />

    <ImageView
        android:id="@+id/faceOverlay"
        .../>

    <TextView
        android:id="@+id/faceOverlayText"
        ...
        />

    <!--General text field which will be used to display information for the user-->
    <TextView
        android:id="@+id/captureFeedback"
        ...
        />

Preparation 

Before performing the challenge, the capture screen must be initialized. In the sample app this is the responsibility of the CR2DCaptureProcess.ProcessBuilder component; the builder calls below show the meaning of each parameter.

Kotlin
builder.captureMode(BioCaptureMode.TRACK_FACE_CR2D_MEDIUM)
    .timeout(appSettings.timeout.toLong())
    .cr2dConfiguration(Cr2dConfigurationPath(CR2D_POINTS))
    .onCurrentUpdated { sceneController.update(it) }
    .onTargetUpdated { sceneController.update(it) }
    .onFailure(::onCaptureFailure)
    .onSuccess(::onCaptureSuccess)
    .onTracking(BioCaptureTrackingListener { sceneController.onTracking(it) })
    .onFeedback(BioCaptureFeedbackListener(::onFeedbackReceived))
    .onTargetsConditionUpdated { targetCount, targetStability ->
        challengeView.hideFaceOutline()
        sceneController.update(targetCount, targetStability)
    }

if (appSettings.useIllumination) {
    builder.illumination(this@CR2DChallengePresenter)
}
cR2DPreview = builder.build()

Challenge Resource Management 

During the challenge procedure, multiple resources are used. Warning: Be very careful to remember to release each of them.

As in the rest of the sample application, this functionality is implemented with the view/presenter approach and the presenter is of the type CR2DChallengePresenter. It's a great example to understand the proper order of instructions in the activity which uses the presenter.

Here is the order of all the operations that have to be performed (a lifecycle sketch follows the list):

  1. Presenter: prepareFaceCapture

    • builds the CR2DPreview object
  2. Presenter: startFaceCapture

    • starts CR2DPreview
    • draws face outline
    • starts CR2DCapture
    • starts SceneController
  3. Presenter: onPause

    • stops SceneController
    • stops CR2DPreview
    • stops CR2DCapture
  4. Presenter: release

    • destroys SceneController
    • destroys CR2DCapture
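A lifecycle sketch of those calls from the hosting fragment or activity; the method names are taken from the list above, but the wiring itself is an assumption rather than the sample's actual code.

Kotlin
// Sketch only: forwards the lifecycle to CR2DChallengePresenter in the order listed above.
override fun onResume() {
    super.onResume()
    presenter.prepareFaceCapture() // builds the CR2DPreview object
    presenter.startFaceCapture()   // starts preview, capture and the SceneController
}

override fun onPause() {
    presenter.onPause()            // stops SceneController, CR2DPreview and CR2DCapture
    super.onPause()
}

override fun onDestroy() {
    presenter.release()            // destroys SceneController and CR2DCapture
    super.onDestroy()
}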

Prepare Face Capture in Details 

The presenter uses JoinThePointsSceneController from UIExtensions. Callbacks to this controller are configured through CR2DCaptureProcess.ProcessBuilder.

The builder properties that feed the SceneController:

  • onCurrentUpdated { sceneController.update(it) }
  • onTargetUpdated { sceneController.update(it) }
  • onTargetsConditionUpdated { targetCount, targetStability -> challengeView.hideFaceOutline(); sceneController.update(targetCount, targetStability) }

General property:

  • captureMode(BioCaptureMode.TRACK_FACE_CR2D_MEDIUM) - you can choose from among three different capture mode families: TRACK_FACE_CR2D_*, TRACK_FACE_LIVENESS_* and FINGERPRINT_*

Face Sample App Settings 

The exact flow of the tutorial screens depends on the settings you choose. To enter the settings screen, open the menu (the three little dots) in the top right corner of the screen.

SampleAppIntGuide face sample app

Choose which challenge method to use (CR2D - for joining points on the screen and SLAM for face scan).

SampleAppIntGuide face sample settings

Settings are backed by SettingsFragment.kt and SettingsPresenter.kt. The latter saves the chosen values using SettingsStorage.

Kotlin
interface SettingsStorage {
    fun load(): FaceAppSettings
    fun save(settings: FaceAppSettings)
}
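One possible backing implementation, sketched with SharedPreferences; the FaceAppSettings field names and preference keys are assumptions based on the settings referenced elsewhere in this guide, not the sample's actual code.

Kotlin
// Sketch only: persists FaceAppSettings in SharedPreferences. Field and key names are assumptions.
class PreferencesSettingsStorage(private val prefs: SharedPreferences) : SettingsStorage {

    override fun load(): FaceAppSettings = FaceAppSettings(
        mode = ChallengeMode.valueOf(prefs.getString("mode", ChallengeMode.CR2D.name)!!),
        showTutorial = prefs.getBoolean("showTutorial", true),
        useIllumination = prefs.getBoolean("useIllumination", false),
        timeout = prefs.getInt("timeout", 120)
    )

    override fun save(settings: FaceAppSettings) {
        prefs.edit()
            .putString("mode", settings.mode.name)
            .putBoolean("showTutorial", settings.showTutorial)
            .putBoolean("useIllumination", settings.useIllumination)
            .putInt("timeout", settings.timeout)
            .apply()
    }
}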

Face Sample App Splash Screen 

SampleAppIntGuide splash screen

This screen is displayed when the application is loaded and it can easily be customized with brand colors.

All graphical settings can be found in the layout file: fragment_splash.xml.

Note: It's also a convenient place to put any initial checks and settings like: requesting permissions or activating a license.

The sample app presenter logic behind this screen takes care of requesting a license and requesting proper permissions. If those operations are successful, then it forwards the end-user to the tutorial page.

The flow is illustrated in the following picture:

SampleAppIntGuide presenter logic

Splash Screen Presenter 

The Sample App uses the MVP approach which simplifies code testing by decoupling decision logic from the UI view.

For example, when the onViewCreated method in a fragment is called by the framework, then the fragment will inform the presenter that it is started, which triggers the license activation procedure which is described in License Activation.

Kotlin
//fragment
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
    presenter = providePresenter()
    presenter.onCreate()
}

//presenter
override fun onCreate() {
    Log.d(TAG, "onStart")
    if (permissionProvider.permissionsGranted()) {
        startFetchLicense()
    } else {
        requestPermissions()
    }
}

The main advantage of this approach is that presenter is completely decoupled from the Android framework, which simplifies testing.

Requesting Permissions 

To request permission, a component com.idemia.smartsdk.sample.common.permission.AppPermissionProvider from the Commons module is used.

If the Android system version is below M then all permissions will be required statically according to the declaration in the manifest file.

XML
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>

From M-version and above, Android supports dynamic permissions requests.

Kotlin
fragment.requestPermissions(PERMISSIONS_TO_REQUEST, PERMISSION_REQUEST_CODE)

AppPermissionProvider has three dependencies but all three have default implementations provided by a companion object. The purpose of this construction is to make testing of this possible by injecting mock dependencies during tests.

Kotlin
class AppPermissionProvider(
    @RequiresApi(Build.VERSION_CODES.M) private val checkPermission: PermissionChecker,
    private val triggerPermissionRequest: PermissionRequester,
    private val versionInfoProvider: VersionInfoProvider = StaticVersionInfo()
) : PermissionProvider {

//In companion object
operator fun invoke(f: Fragment): AppPermissionProvider =
    AppPermissionProvider(
        @RequiresApi(Build.VERSION_CODES.M) {
            (f.activity as Activity).checkSelfPermission( //permission checker
                it
            )
        },
        { f.requestPermissions(PERMISSIONS_TO_REQUEST, PERMISSION_REQUEST_CODE) } //permission requester
    )
}

When permissions are provided, the license activation is launched.

Fetching License with Coroutines 

The component responsible for triggering and fetching the license procedure is SplashPresenter, which is defined in the Commons module. It uses navigator abstraction, which is defined separately by each specific application.

In the SplashPresenter there is a method which uses the License Activation Module. Because in one of the scenarios the application has to connect to a remote server, this operation needs to run on a separate thread so as not to freeze the UI. This problem is solved with the Coroutines framework.

In the sample below, licenseService is first triggered in the IO context, which provides a thread for input/output operations. Afterwards, execution switches back to the UI context.

Kotlin
fun fetchLicence() {
    licenseJob = scopeIO.launch {
        try {
            licenseService.prepareLicence()
            navigator.navigateTo()
        } catch (exception: LkmsNetworkException) {
            handleException(exception)
        }
    }
}

When the license is restored and all necessary permissions are granted, the end-user will be presented with a series of tutorials.

Tutorial 

The following is a collection of simple screens which explain the application's functionality:

Screen 1

SampleAppIntGuide UserHelp step1

Screen 2

SampleAppIntGuide UserHelp centerHead

Screen 3

SampleAppIntGuide UserHelp light

Tutorial

SampleAppIntGuide UserHelp dotTrack

Navigation between those screens is backed by a set of fragments placed in the com.idemia.smartsdk.sample.tutorial.fragments package. Transitions between the tutorial sub pages are implemented with usage of the official Navigation Component from Google.

For a nice graphical overview, open the navigation_cr2d_tutorial.xml file or navigation_slam_tutorial.xml while in design mode.

Note: The picture below is used just for presentation purposes. Current navigation in the app can be different.

SampleAppIntGuide navigation slam

Tutorial Configuration 

Tutorial navigation is controlled by settings which are described in the next section. Seeing the tutorial is optional.

Optionally, challenge mode can be chosen, which will be used in registration and authentication.

Given those choices as input, the TutorialPresenter handles navigation accordingly.

Kotlin
//This part handles tutorial settings choice
override fun configureTutorial() {
    if (!appSettings.showTutorial) {
        tutorialView.setStartingDestination(R.id.settingsActivity)
    } else {
        chooseChallengeMode()
    }
}

//if you checked the tutorial field in settings then the second condition handles the challenge mode
private fun chooseChallengeMode() {
    when (appSettings.mode) {
        ChallengeMode.CR2D -> {
            tutorialView.setStartingDestination(
                R.id.firstStepFragment,
                bundleOf(
                    TutorialActivity.USE_ILLUMINATION to appSettings.useIllumination
                )
            )
        }
        ChallengeMode.SLAM ->
            tutorialView.setStartingDestination(R.id.SLAMStepFragment)
    }
}

In the activity, notice the use of the standard navigation API.

Kotlin
override fun setStartingDestination(destinationResId: Int, arguments: Bundle) {
    val graph = navController.navInflater.inflate(R.navigation.navigation_tutorial)
    graph.startDestination = destinationResId
    navController.setGraph(graph, arguments)
}

It's possible to use a different navigational approach within an integrator's own application.

License Activation 

To use Biometric Capture SDK, provide the following pre-configuration parameters:

LicenseServerUrl, ApiKey and ProfileID.

Values of those parameters will be determined during external business processes. Please contact your Business Unit Manager for more information.

Pass the aforementioned parameters to the LicenseManager module which is the main component responsible for retrieving a license.

  • In SampleApps notice the usage of this module in SplashPresenter where the license service is used during the onCreate phase.

The module entry point is the LkmsLicenseService class, which tries to restore the license. If this step fails, it tries to recover by fetching a new license.

LkmsLicenseService has only one dependency, which you need to provide in order to configure the license fetching procedure:

Kotlin
private val licenseManager: LkmsLicenseManagerWrapper,

LicenseManagerWrapper Configuration 

This component has two initial parameters:

  • Android Context can be obtained directly from the Android framework

  • License Manager can be obtained by LkmsLicenseManagerProvider which just acts as a proxy for a call using the static method in the LicenseManager

LicenseService Usage 

When all dependencies are provided, you can use an implementation of LicenseService, which is called LkmsLicenseService.

Kotlin
override suspend fun prepareLicence(): LicenseActivationResult {
    return licenseManager.activate()
}

The licenseManager tries to fetch a new license and activate it, or, if a license was fetched before, it activates the existing license without fetching a new one.

Understand API through Tests 

In the sample app you can find very useful unit tests which may be helpful in API exploration.

For license activation, please go to the file LkmsLicenseServiceTest. Most of the test cases should be self-descriptive. Below you will find short descriptions of particular test scenarios.

  • should return error after fail to activate licence - calling prepareLicense on LicenseService triggers activation on licenseManager, but if LicenseManager cannot fetch or activate a license it returns LicenseActivationError

  • should return LicenseActivationSuccess after successful license activation - calling prepareLicense triggers activation on licenseManager. If it can fetch and activate a license successfully, it returns LicenseActivationSuccess

For more information about LicenseManager, see the integration guide.

User Management 

This module is responsible for the general management of users of the sample app.

It allows for a user account to be stored, retrieved, or deleted.

Application User and SDK User 

When reading code you'll come across the following classes and interfaces: User, IUser, and AppUser.

  • IUser - Interface provided to the app by Biometric Capture SDK

  • User - Default implementation of IUser. Maintained by Biometric Capture SDK.

  • AppUser - Describes user data inside the application (a rough sketch of this class follows the diagram below). It consists of:

    • Biometric Capture SDK User data - just name and id
    • Templates - biometric data captured during enrollment
    • Image - the image captured during enrollment
SampleAppIntGuide app v sdk
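A rough sketch of what AppUser looks like, inferred from the description above; the field names and types here are assumptions, not the sample's actual declaration.

Kotlin
// Sketch only: the shape of AppUser as described above. Field names and types are assumptions.
data class AppUser(
    val user: IUser,                  // Biometric Capture SDK user data (name and id)
    val templates: List<ByteArray>,   // biometric templates captured during enrollment
    val imageFileName: String         // image captured during enrollment
)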

UserService Configuration 

This component has three dependencies:

  • UserStore implemented by BiosUserStore - responsible for storing IUser information. It depends only on MorphoBioStoreDBWrapper, which only wraps static calls to Biometric Capture SDK.

  • UserTemplateStore implemented by BioUserTemplateStore - it also depends only on MorphoBioStoreDBWrapper and is easy to initialize.

  • ImageStore implemented by UserImageStore - this component saves the images taken by the user during enrollment. They are saved directly to disk, which is why ImageStore depends on FileStore. This store is implemented by AndroidInternalFileStore, which only depends on an Android context.

UserService Usage 

The component has a simple interface (a usage sketch follows the list):

  • saveUser(appUser) - stores userData, templates, and pictures created during enrollment.

  • remove/removeAll - removes a user (or all users) from the system. In case of any error, it will be visible in the logs.

  • listUsers - loads all data from every "store" for each user.
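A usage sketch, assuming userManager is an instance of this service and appUser was built after enrollment; both names are assumptions used only for illustration.

Kotlin
// Sketch only: typical calls on the user service.
runBlocking {
    userManager.saveUser(appUser)       // persist user data, templates and enrollment images
    val users = userManager.listUsers() // load all stored data for every user
    userManager.removeAll()             // remove all users from the system
}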

Understand API through Tests 

Because the "store" components in this module mainly forward calls to Biometric Capture SDK, the tests listed below explain which internal methods need to be called in order to obtain the expected behavior.

  • BioUserStoreTest - shows which methods on BioStoreDBWrapper will be called as an effect of the BioUserStore method calls.

  • BioUserTemplateStoreTest - similar to above but focuses on templates.

  • UserImageStoreTest - this one is for images.

Usages 

Below are some specific usages to help you to better understand the API module.

SettingsPresenter - here you can remove all users to start enrollment all over again:

Kotlin
private fun removeUsers() {
    runBlocking {
        userManager.removeAll()
    }
}