
Selfie Capture

About this guide

This guide is designed to help you integrate the iOS SDK quickly and easily. On this page, you find some key concepts, implementation examples, and how to interact with the Biometric Engine REST APIs.

Please note

Please note that this guide focuses on the image capture process. For more information about the REST APIs of Unico Check, please refer to the REST API Reference guide.

Following this guide, you are able to:

  • Learn how to open the user's camera and capture an image;
  • Learn how to link the parameters returned by the SDK with the REST APIs;
  • Learn how to deal with the data returned by the REST API;

Before you begin

Complete the step-by-step instructions in the Getting started guide to set up your account, get your API key, and install the SDK. It is also recommended that you check the available features of this SDK on the Overview page.

Available resources

This SDK offers a component that allows you to capture optimized images in your app, displaying a silhouette that helps your users position themselves correctly for the image capture.

You can offer one of the following Selfie Capture modes in your app:

Manual Capture

The SDK displays a frame to help users to correctly position their faces. The users are then responsible for capturing the image by clicking on a button (also provided by the SDK).

The SDK does not perform any kind of validation of what is being captured. If the captured image does not have what is considered a biometrically valid face, the JWT generated by the SDK is rejected by the Biometric Engine REST API.


Automatic Capture

The SDK automatically identifies the user's face through computer vision algorithms and helps them position themselves correctly within the capture area. Once they are correctly positioned, the image is captured automatically.

Problems when sending the JWT to the Biometric Engine REST API are minimized, as this option helps the user to frame their face in the capture area.


Smartlive with interaction FaceTec

In this kind of experience, users are instructed to perform some simple movements during the image capture. These movements are then verified by computer vision algorithms to ensure that the user is really in front of the camera. By requiring users to move in front of the camera, this kind of experience adds an extra security layer against fraud.

As in the Automatic Capture mode, the image here is captured without users pressing any button. This option can also dramatically reduce problems when sending the JWT to the Biometric Engine REST API.

Smartlive with interaction FaceTec activation

This functionality must be activated inside the Unico Customer Portal, as explained in this article.

Implementation

Follow the steps below to embed the full potential of the SDK in your app.

  1. Initialize the SDK

    First, import the SDK and implement the interface AcessoBioManagerDelegate inside your ViewController.

    .m:
    #import "ViewController.h"
    #import <AcessoBio/AcessoBio.h>

    @interface ViewController ()

    @end

    @implementation ViewController

    - (void)viewDidLoad {
        [super viewDidLoad];

        unicoCheck = [[AcessoBioManager alloc] initWithViewController:self];
    }

    - (void)onErrorAcessoBioManager:(ErrorBio *)error {
        // Your code
    }

    - (void)onSystemChangedTypeCameraTimeoutFaceInference {
        // Your code
    }

    - (void)onSystemClosedCameraTimeoutSession {
        // Your code
    }

    - (void)onUserClosedCameraManually {
        // Your code
    }

    Environment configuration

    Configure the environment that will be used when running the SDK. Use the Environment enumeration, which contains the following values:

    PROD: for the Production environment

    UAT: for the Testing environment

    See how to implement it in the example below:

        [unicoCheck setEnvironment:UAT];

    This implementation takes only a few lines of code. Override the callback functions with your business rules; each callback function is invoked as detailed below:

    onErrorAcessoBioManager(_ error: ErrorBio!) Method

    This callback function is invoked whenever an implementation error happens, for example, when an incorrect or nonexistent capture mode is informed while configuring the camera.

    Once invoked, this callback function receives an object of type ErrorBio containing the error details. Learn more about the type ErrorBio in the iOS SDK references document.

    onUserClosedCameraManually() Method

    This callback function is invoked whenever a user manually closes the camera, for example, by tapping the "Back" button.

    onSystemClosedCameraTimeoutSession() Method

    This callback function is invoked whenever the session timeout is reached (without capturing any image).

    Timeout Session

    The session timeout can be set with the builder using the setTimeoutSession method. The value must be specified in seconds.
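
    As a hedged illustration, setting the session timeout on the builder could look like the snippet below; the 40-second value is arbitrary, not a recommended default, and the exact parameter type is described in the iOS SDK reference.

        // Sets the session timeout, in seconds, on the AcessoBioManager builder.
        // The value 40 is only an illustrative example.
        [unicoCheck setTimeoutSession:40];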

    onSystemChangedTypeCameraTimeoutFaceInference() Method

    This callback function is invoked whenever the timeout for face detection is reached (without detecting any face). In this case, the capture mode is automatically changed to the manual mode (the one without the smart frame).

    Be careful

    All the above callback functions must be declared in your project (even without any business rules). Otherwise, you won't be able to compile your project.

  2. Configure capture mode

    As explained in the section above, there are three capture modes. If you are not using the Smartlive with interaction - FaceTec mode, in this step you can choose between the Manual and Automatic capture modes.

    Tip: Smartlive with interaction - FaceTec

    If you are using the Smartlive with interaction - FaceTec capture mode, a standard experience is provided and cannot be customized; therefore, this configuration might be irrelevant to you.

    However, it is recommended that you configure a capture mode in your builder: if you later disable the Smartlive with interaction - FaceTec mode in your Customer Area, you do not need to change your code.

    The SDK is configured, by default, with both Smart Frame and Auto Capture enabled. To use the camera in manual mode, you have to disable both features using the setAutoCapture and setSmartFrame methods. Below you can find out how to configure the camera mode:

    The following examples show how to configure each of these methods from a button in your UI.

    Smart Camera (Automatic Capture)

    If you decide to use both default functionalities, you don't need to configure anything in this step.

    If the camera configurations were previously modified in your app, you can restore them using the setAutoCapture and setSmartFrame methods:

    .m:
    - (IBAction)configureSmartCamera:(UIButton *)sender {

        // unicoCheck object of the AcessoBioManager class
        [unicoCheck setSmartFrame:true];
        [unicoCheck setAutoCapture:true];
    }
    Automatic Capture without Smart Frame.

    It is not possible to set setAutoCapture to true and setSmartFrame to false. In other words, it is not possible to use Automatic Capture without the Smart Frame, as this component makes the intelligent framing to the image capture.

    Manual mode

    To use the manual mode, both default configurations must be set to false using the setAutoCapture and setSmartFrame methods:

    .m:
    - (IBAction)configureSmartCamera:(UIButton *)sender {

        // unicoCheck object of the AcessoBioManager class
        [unicoCheck setSmartFrame:false];
        [unicoCheck setAutoCapture:false];
    }

    Tip: Manual mode with Smart Frame

    You can use the Smart Frame with the manual mode. In this case, a silhouette is displayed to the users, helping them to correctly frame themselves to enable the capture button. To enable this configuration, set setAutoCapture to false and setSmartFrame to true, as shown in the sketch below.
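
    A minimal sketch of this combination, following the pattern of the examples above (the configureManualWithSmartFrame action name is only illustrative):

    - (IBAction)configureManualWithSmartFrame:(UIButton *)sender {

        // The Smart Frame silhouette stays on to guide the user,
        // but the capture button must still be tapped manually.
        [unicoCheck setSmartFrame:true];
        [unicoCheck setAutoCapture:false];
    }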

  3. Implement delegates to camera events

    Through the implementation of the delegates, you can configure what happens in your app in both success and error cases when capturing an image, using the onSuccessSelfie and onErrorSelfie methods, respectively.

    To configure the delegates, you have to implement the interfaces SelfieCameraDelegate and AcessoBioSelfieDelegate:

    .h:
    #import <UIKit/UIKit.h>
    #import <AcessoBio/AcessoBio.h>
    #import "SelfieCameraDelegate.h"

    @interface ViewController : UIViewController <AcessoBioManagerDelegate,
                                                  SelfieCameraDelegate,
                                                  AcessoBioSelfieDelegate> {

        AcessoBioManager *unicoCheck;
        // Your code from previous and next steps here ;)
    }


    onSuccessSelfie Method

    This method is invoked whenever an image is successfully captured. Once invoked, this function receives an object of type SelfieResult that is used later to call the REST APIs.


    - (void)onSuccessSelfie:(SelfieResult *)result {
        NSLog(@"%@", result.base64);
    }

    A successful response includes the SelfieResult object with the following attributes:

    • base64: This attribute can be used if you want to display a preview of the captured image in your app;
    • encrypted: This attribute must be sent to the Unico Check REST APIs, as detailed here;
    Converting a Base64 to Bitmap

    If you want to convert a Base64 string into a Bitmap image, the standard approach won't work on iOS. Learn more in this Stack Overflow article.
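
    As a hedged sketch, assuming the issue discussed in the linked article is that the strict decoder rejects unexpected characters (such as line breaks) in the string, a preview image could be built inside onSuccessSelfie as follows; adapt it to whatever the article recommends for your case:

    // Decode the base64 preview into a UIImage, ignoring unknown characters
    // (e.g. line breaks) that would otherwise make the decoder return nil.
    // If the string carries a data URI prefix, strip it first.
    NSData *imageData = [[NSData alloc]
        initWithBase64EncodedString:result.base64
                            options:NSDataBase64DecodingIgnoreUnknownCharacters];
    UIImage *preview = imageData ? [UIImage imageWithData:imageData] : nil;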

    Alert

    The Encrypted attribute is strictly intended for sending the image through the Unico APIs. You should not open or serialize this attribute, as its characteristics may change without notice. Use it exclusively in interactions with the APIs to guarantee data integrity and security. Unico is not responsible for any damages arising from this practice, since the changes may occur unpredictably.

    onErrorSelfie Method

    This method is invoked whenever an error happens while capturing an image. Once invoked, this callback function receives an object of type ErrorBio containing the error details. Learn more about the type ErrorBio in the iOS SDK references document.


    - (void)onErrorSelfie:(ErrorBio *)errorBio {
        // Your code
    }

    Object ErrorBio

    Learn more about the type ErrorBio in the iOS SDK references document.

  4. Customize the capture frame

    Optional step

    This step is optional but recommended.

    You can customize the capture frame in the SDK. To do so, use the method corresponding to the property you want to customize and apply the change with the setTheme() method.

    Learn more about the setTheme() method and the customization possibilities in the iOS SDK reference documentation.
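
    As a hedged example, applying a custom theme could look like the line below; YourCustomTheme is a hypothetical name for a class implementing the theme interface described in the reference documentation, and the exact receiver and protocol should be confirmed there.

        // YourCustomTheme is an illustrative name for your implementation of
        // the SDK's theme interface; see the iOS SDK reference documentation.
        [unicoCheck setTheme:[YourCustomTheme new]];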

  5. Customize language

    You can change the language of the informational messages displayed in the capture frames. Use the LocaleTypes enumeration, which contains the following values:

    Comments

    1- This feature is only available for the Smartlive with interaction (FaceTec) camera.
    2- Remote activation may be required; to use this feature, request activation from the CSM of your account.
    3- If not configured, the iOS SDK uses Portuguese as the default language.

    LocaleTypes.PT_BR: for Portuguese (Brazil)
    LocaleTypes.ES_MX: for Spanish (Mexico)
    LocaleTypes.ES_ES: for Spanish (Spain)
    LocaleTypes.EN_US: for English (US)

    See how to implement it in the example below:

    [unicoCheck setLocale:EN_US];
  6. Environment configuration

    Default configuration

    If not configured, the SDK uses the environment set in the configuration file (getHostKey). If getHostKey is not being used, an error is returned.

    It is possible to configure the environment that will be used to run the SDK. Use the EnvironmentEnum enumeration, which contains the following values:

    EnvironmentEnum.PROD: for the Production environment

    EnvironmentEnum.UAT: for the Testing environment

    See how to implement it in the example below:

    [unicoCheck setEnvironment:PROD];
  7. For an authentication experience more closely aligned with your brand, you can insert and customize your logo in real time.

    Add an image named unico_custom_footer_logo to your project.

    Note

    The file must have exactly this name. Otherwise, the implementation does not work.

    When you run the project again, your logo should appear at the bottom of the screen as shown below:

    iOS SDK Logo Example
  8. Prepare and open the camera

    First, use the prepareSelfieCamera method, which receives the implementation of the SelfieCameraDelegate class together with the SDK credentials as parameters. The classes are configured in this step.

    .h:
    #import <UIKit/UIKit.h>
    #import <AcessoBio/AcessoBio.h>
    #import "SelfieCameraDelegate.h"

    @interface ViewController : UIViewController <AcessoBioManagerDelegate,
                                                  SelfieCameraDelegate,
                                                  AcessoBioSelfieDelegate> {

        AcessoBioManager *unicoCheck;
    }

    .m:
    - (IBAction)openCamera:(UIButton *)sender {

        // with AcessoBioConfigDataSource implementation
        [[unicoCheck build] prepareSelfieCamera:self config:[YourUnicoConfigClass new]];

        // or

        // with JSON config
        [[unicoCheck build] prepareSelfieCamera:self jsonConfigName: @""];
    }

    Once the camera is ready, the onCameraReady method is triggered, receiving as a parameter an object of type AcessoBioCameraOpenerDelegate. You must override this method and open the camera using the open() method, which receives as a parameter the delegates configured in the previous steps.

    - (void)onCameraReady:(id)cameraOpener {
        [cameraOpener open:self];
    }

    - (void)onCameraFailed:(NSString *)message {
        // Your code
    }

    In the case of an error, the onCameraFailed event is triggered. You must override this method and handle the exception according to your business rules.
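
    As a minimal, illustrative sketch of such handling, onCameraFailed from the block above could log the message and surface a simple alert (the alert copy is arbitrary):

    - (void)onCameraFailed:(NSString *)message {
        NSLog(@"Camera failed: %@", message);

        // Illustrative handling only: show the failure to the user.
        UIAlertController *alert =
            [UIAlertController alertControllerWithTitle:@"Capture error"
                                                message:message
                                         preferredStyle:UIAlertControllerStyleAlert];
        [alert addAction:[UIAlertAction actionWithTitle:@"OK"
                                                  style:UIAlertActionStyleDefault
                                                handler:nil]];
        [self presentViewController:alert animated:YES completion:nil];
    }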

  9. Call the REST APIs

    Capturing the images is just the first part of the journey. After capturing the image, it is necessary to send the generated Encrypted to the APIs, selecting one of the available flows detailed in Flows.

    Attention

    For security reasons, the interval between generating the Encrypted and sending it via one of the available flows must be a maximum of 10 minutes. Submissions made beyond this period will be automatically rejected by the API.
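
    The HTTP call itself is outside the scope of the SDK. As a purely illustrative, hedged sketch (the URL, header values, and JSON field name below are placeholders, not the real contract; consult the REST API Reference guide for the actual endpoints and payloads), sending the Encrypted value obtained in onSuccessSelfie could look like this:

    // Placeholders only: see the REST API Reference guide for the real contract.
    NSURL *url = [NSURL URLWithString:@"https://example.com/your-chosen-flow"];
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    request.HTTPMethod = @"POST";
    [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
    [request setValue:@"Bearer <your-access-token>" forHTTPHeaderField:@"Authorization"];

    // encryptedValue holds the `encrypted` attribute received in onSuccessSelfie.
    NSDictionary *payload = @{ @"encrypted": encryptedValue };
    request.HTTPBody = [NSJSONSerialization dataWithJSONObject:payload options:0 error:nil];

    NSURLSessionDataTask *task = [[NSURLSession sharedSession]
        dataTaskWithRequest:request
          completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
              // Handle the Biometric Engine response with your business rules.
          }];
    [task resume];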

Getting help

Are you missing something or still need help? Please get in touch with the support team at the Help Center.

Next steps