Selfie Capture

About this guide

This guide is designed to help you integrate the Flutter SDK quickly and easily. On this page, you will find key concepts and implementation examples, as well as how to interact with the Biometric Engine REST APIs.

Please note

Please note that this guide focuses on the image capture process. For more information about the Unico Check REST APIs, please refer to the REST API Reference guide.

By following this guide, you will be able to:

  • Learn how to open the user's camera and capture an image;
  • Learn how to link the parameters returned by the SDK with the REST APIs;
  • Learn how to deal with the data returned by the REST API.

Before you begin

You must complete the step-by-step instructions in the Getting started guide to set up your account, get your API key, and install the SDK. It is also recommended that you check the features available in this SDK on the Overview page.

Available resources

This SDK offers a component that allows you to capture optimized images in your app, displaying a silhouette that helps your users position themselves correctly for the image capture.

You can offer one of the following Selfie Capture modes in your app:

Manual Capture

The SDK displays a frame to help users position their faces correctly. Users are then responsible for capturing the image by tapping a button (also provided by the SDK).

The SDK does not perform any validation of what is being captured. If the captured image does not contain what is considered a biometrically valid face, the JWT generated by the SDK is rejected by the Biometric Engine REST API.

Manual Capture

Automatic Capture

The SDK automatically identifies the user's face through computer vision algorithms and helps them position themselves correctly within the capture area. Once they are correctly positioned, the image is captured automatically.

Problems when sending the JWT to the Biometric Engine REST API are minimized, as this option helps users frame their faces within the capture area.

Automatic Capture

Smartlive with interaction FaceTec

In this kind of experience, users are instructed to perform simple movements during the image capture. Those movements are then verified by computer vision algorithms to ensure that the user is really in front of the phone. By requiring users to move in front of the camera, this kind of experience adds an extra layer of security against fraud.

As in the Automatic Capture mode, the image is captured without the user pressing any button. This option can also dramatically reduce problems when sending the JWT to the Biometric Engine REST API.

Smartlive with interaction FaceTec activation

This functionality must be activated inside the Unico Customer Portal, as explained in this article.

Implementation

Follow the steps below to embed the full potential of the SDK in your app.

  1. Initialize the SDK

    First, you have to instantiate the builder through the UnicoCheckBuilder interface. In the example below, the widget state also implements the UnicoListener interface and overrides its callback functions:

    class _MyHomePageState extends State<MyHomePage> implements UnicoListener {
      late UnicoCheckBuilder _unicoCheck;

      @override
      void onErrorUnico(UnicoError error) {}

      @override
      void onUserClosedCameraManually() {}

      @override
      void onSystemChangedTypeCameraTimeoutFaceInference() {}

      @override
      void onSystemClosedCameraTimeoutSession() {}
    }

    ENVIRONMENT CONFIGURATION

    Configure the environment that will be used when running the SDK. Use the UnicoEnvironment enum, which contains the following values:

    • UnicoEnvironment.PROD: for the Production environment
    • UnicoEnvironment.UAT: for the Testing environment

    See how to implement it in the example below:

        _unicoCheck.setEnvironment(unicoEnvironment: UnicoEnvironment.UAT)

    This implementation requires just a few lines of code. Override the callback functions with your business rules. Each callback function is invoked as detailed below:

    onErrorUnico(UnicoError error)

    This callback function is invoked whenever an implementation error occurs, for example, when an incorrect or nonexistent capture mode is specified while configuring the camera.

    Once invoked, this callback function receives an object of type UnicoError containing the error details. Learn more about the UnicoError type in the Flutter SDK reference documentation.

    onUserClosedCameraManually()

    This callback function is invoked whenever a user manually closes the camera, for example, by tapping the "Back" button.

    onSystemClosedCameraTimeoutSession()

    This callback function is invoked whenever the session timeout is reached without any image being captured.

    Timeout Session

    The session timeout can be set on the builder using the setTimeoutSession method. The timeout value must be specified in seconds.
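
    For illustration, a hedged sketch of how the session timeout might be set on the builder; the timeoutSession parameter name is an assumption, so confirm it in the Flutter SDK reference documentation:

    UnicoCheckCameraOpener _opener = new UnicoCheck(this)
        .setTimeoutSession(timeoutSession: 40) // parameter name assumed; value in seconds
        .build();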

    onSystemChangedTypeCameraTimeoutFaceInference()

    This callback function is invoked whenever the timeout for face detection is reached (without detecting any face). In this case, the capture mode is automatically changed to the manual mode (the one without the smart frame).

    Be careful

    All the above callback functions must be declared in your project, even if they contain no business rules. Otherwise, your project won't compile.

  2. Configure capture mode

    As explained in the section above, there are three capture modes. If you are not using the Smartlive with interaction - FaceTec mode, in this step you can choose between the Manual and Automatic capture modes.

    Tip Smartlive with interaction - FaceTec

    If you are using the Smartlive with interaction - FaceTec capture mode, a standard experience is provided and cannot be customized; therefore, this configuration might be irrelevant to you.

    However, it is recommended that you configure a capture mode in your builder anyway: if you later disable the Smartlive with interaction - FaceTec mode in your Customer Area, you will not need to change your code.

    By default, the SDK is configured with both Smart Frame and Auto Capture enabled. To use the camera in manual mode, you have to disable both features using the setAutoCapture and setSmartFrame methods. Below you can find out how to configure each camera mode:

    Smart Camera (Automatic Capture)

    If you decide to use both default functionalities, you don't need to configure anything in this step.

    If the camera configurations were previously modified in your app, you can restore them by using the setAutoCapture and setSmartFrame methods:


    UnicoCheckCameraOpener _opener = new UnicoCheck(this)
        .setAutoCapture(autoCapture: true)
        .setSmartFrame(smartFrame: true)
        .build();

    Automatic Capture without Smart Frame.

    It is not possible to set setAutoCapture(autoCapture: true) together with setSmartFrame(smartFrame: false). In other words, it is not possible to use Automatic Capture without the Smart Frame, as this component performs the intelligent framing for the image capture.

    Manual mode

    To use the manual mode, both default configurations must be set to false using the setAutoCapture and setSmartFrame methods:

    UnicoCheckCameraOpener _opener = new UnicoCheck(this)
        .setAutoCapture(autoCapture: false)
        .setSmartFrame(smartFrame: false)
        .build();

    Tip: Manual mode with Smart Frame

    You can use the Smart Frame with the manual mode. In this case, a silhouette is displayed to the users, helping them frame themselves correctly in order to enable the capture button. To enable this configuration, set autoCapture to false and smartFrame to true, as in the sketch below.
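
    A minimal sketch of this combination (manual capture with the Smart Frame enabled), using the same builder calls shown above:

    UnicoCheckCameraOpener _opener = new UnicoCheck(this)
        .setAutoCapture(autoCapture: false) // user taps the capture button
        .setSmartFrame(smartFrame: true)    // silhouette is still displayed
        .build();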

  3. Customize the capture frame

    Optional step

    This step is optional but recommended.

    You can customize the capture frame in the SDK. To do so, use the method corresponding to the property you want to customize and apply the change with the setTheme() method.

    Learn more about the setTheme() method and the customization possibilities at the Flutter SDK reference documentation.
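
    As an illustration only, a hedged sketch of what applying a theme to the builder might look like. The UnicoTheme class and the property and parameter names below are placeholders, not confirmed API names, so check the Flutter SDK reference documentation for the actual ones:

    // Hypothetical illustration: class, property, and parameter names are placeholders.
    final theme = UnicoTheme(
      colorSilhouetteSuccess: "#4CAF50", // assumed property name
      colorSilhouetteError: "#F44336",   // assumed property name
    );

    _unicoCheck.setTheme(unicoTheme: theme); // parameter name is an assumption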

  4. Open the camera

    Finally, let's open the camera! To make it easier, this last part is split into a few steps.

    Implementing the listeners

    By implementing the listeners, you can configure what happens in your app in the success and error cases when capturing an image. To do this, use the onSuccessSelfie and onErrorSelfie methods, respectively.

    onSuccessSelfie Method
    @override
    void onSuccessSelfie(ResultCamera result) { }

    This method is invoked whenever an image is successfully captured. Once invoked, this function receives an object of type ResultCamera that is used later to call the REST APIs.

    onErrorSelfie Method
    @override
    void onErrorSelfie(UnicoError error) { }

    This method is invoked whenever an error happens while capturing an image. Once invoked, this callback function receives an object of type UnicoError containing the error details. Learn more about the UnicoError type in the Flutter SDK reference documentation.

    Listeners implementation

    The implementation of these listeners must be done inside an instance of the UnicoSelfie class.
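
    For context, a minimal sketch of how these listeners could look when combined with the callbacks from step 1. It assumes the same state class can implement both UnicoListener and UnicoSelfie; whether you implement UnicoSelfie in the same class or a separate one is up to your app's structure:

    class _MyHomePageState extends State<MyHomePage>
        implements UnicoListener, UnicoSelfie {
      // ... UnicoListener callbacks from step 1 ...

      @override
      void onSuccessSelfie(ResultCamera result) {
        // result.base64 can be used to preview the captured image;
        // result.encrypted must be sent to the Unico Check REST APIs.
      }

      @override
      void onErrorSelfie(UnicoError error) {
        // Handle the capture error according to your business rules.
      }
    }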

    Opening the camera

    Then, open the camera using the openCameraSelfie method, which receives as parameters the implementation of the UnicoSelfie class together with the SDK credentials configured in this step.

    The following example shows you how to configure the listeners and open the camera:

    _opener.openCameraSelfie(jsonFileName: androidJsonFileName, listener: this)
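
    If your app targets both Android and iOS, you may need to point to the credentials file that matches the current platform. A hedged sketch, assuming hypothetical androidJsonFileName and iosJsonFileName variables that hold the names of your credential JSON files:

    import 'dart:io' show Platform;

    // androidJsonFileName and iosJsonFileName are hypothetical variables
    // holding the names of your platform-specific SDK credential files.
    final jsonFileName = Platform.isAndroid ? androidJsonFileName : iosJsonFileName;

    _opener.openCameraSelfie(jsonFileName: jsonFileName, listener: this);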

    A successful capture returns a ResultCamera object with the following attributes:

    • base64: This attribute can be used in case you want to display a preview of the captured image in your app (see the sketch after this list);
    • encrypted: This attribute must be sent to the Unico Check REST APIs, as detailed here.
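
    For illustration, a minimal sketch of how the base64 attribute could be used to preview the captured image in Flutter. It assumes the string contains raw base64 data without a data URI prefix; everything here is standard Flutter/Dart:

    import 'dart:convert' show base64Decode;
    import 'package:flutter/material.dart';

    // Displays a preview of the captured selfie from the base64 attribute.
    Widget buildSelfiePreview(String base64Image) {
      return Image.memory(base64Decode(base64Image));
    }
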
  5. Call the REST APIs

    The image capture is just the first step of the journey. Now, you have to send the obtained JWT to the REST APIs using one of the available flows, detailed on this page.

    Attention

    For security reasons, the interval between generating the encrypted JWT and sending it via one of the available flows must be at most 10 minutes. Submissions made after this period are automatically rejected by the API.

Getting help

Are you missing something or do you still need help? Please get in touch with the support team at the help center.

Next steps