""

Integrating DataWedge’s PickList OCR into your app

Daniel Neamtu
10 MIN READ

What’s PickList OCR?

 

DataWedge offers various features that allow users to capture data using Zebra devices' cameras or imagers. These range from standard barcode scanning to more advanced capabilities like barcode highlighting and OCR (Optical Character Recognition) readings.

PickList OCR, one of our latest features for DataWedge introduced last year, is available on all our devices running Android 11 and above. This feature allows users to easily capture barcodes or text using the camera or imager in a single workflow, eliminating the need to switch between barcode and OCR functions. The system recognizes alphanumeric words based on OCR rules and lets users adjust OCR confidence levels.

To display the scanned output data, you can use either Intent Output or Keystroke methods. If you're using Keystroke output, we recommend sending a pause command (approximately 200 ms) using the Advanced Data Formatting rules before transmitting the scanned data.

The feature can operate in three different modes:

  • OCR or Barcode (default)
  • OCR Only
  • Barcode Only

Based on these modes, we can assign different sets of rules, which can be summarized as follows:

Report OCR Data Rules - Specifies conditions related to the captured and decoded words

  • Conditions
    • Identifier
      • Min Length - Minimum length of the word to be returned
      • Max Length - Maximum length of the word to be returned
      • Starts With - Specifies the characters with which the word must begin
      • Contains - The word must contain the specified characters
      • Ignore Case Sensitivity - Specifies whether matching against the captured word is case-sensitive

Report Barcode Data Rules - Specifies conditions related to the decoded barcodes

  • Conditions
    • Identifier
      • Min Length - Minimum length of the barcode to be returned
      • Max Length - Maximum length of the barcode to be returned
      • Starts With - Specifies the characters with which the barcode must begin
      • Contains - The barcode must contain the specified characters
      • Ignore Case Sensitivity - Specifies whether matching against the captured barcode's content is case-sensitive
    • Symbology - Specifies the allowed symbologies to be scanned during the workflow

There are many additional parameters you can customize to fine-tune this feature to your liking, and you can find these in our official documentation on TechDocs.
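To make these conditions concrete, here is a plain-Kotlin sketch of how such a rule conceptually filters a captured word. The OcrRule type and matches function are illustrative only, not part of the DataWedge API:

```kotlin
// Illustrative only: models how a Report OCR Data rule's conditions filter a word.
data class OcrRule(
    val minLength: Int = 0,
    val maxLength: Int = Int.MAX_VALUE,
    val startsWith: String = "",
    val contains: String = "",
    val ignoreCase: Boolean = false
)

// A word is reported only when every configured condition holds.
fun matches(word: String, rule: OcrRule): Boolean =
    word.length in rule.minLength..rule.maxLength &&
        word.startsWith(rule.startsWith, rule.ignoreCase) &&
        word.contains(rule.contains, rule.ignoreCase)
```

With a rule requiring a length of 3-7 characters, the prefix "A" and the sequence "BA" (ignoring case), a word like "abac" would be reported, while "XYZ" would not.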

 

Creating the DataWedge Profile

 

Assuming you have your Android project initialized in Android Studio, let's first add these permissions inside the Android Manifest:

 

<uses-permission android:name="com.symbol.datawedge.permission.contentprovider"/>

<queries>
    <package android:name="com.symbol.datawedge" />
</queries>

 

These are essential. We'll need them later to query the URIs we receive in order to reconstruct the captured image during OCR operations.

If you're new to DataWedge or unfamiliar with the concept of profiles and how DataWedge works, please familiarize yourself with the DataWedge documentation on TechDocs before proceeding. With that covered, let's create the base profile:

fun generateDWBaseProfile() {
    val bMain = Bundle().apply {
        putString("PROFILE_NAME", "PickList OCR Demo")
        putString("PROFILE_ENABLED", "true")
        putString("CONFIG_MODE", "CREATE_IF_NOT_EXIST")
        putString("RESET_CONFIG", "true")
    }

    val configApplicationList = Bundle().apply {
        putString("PACKAGE_NAME", packageName)
        putStringArray("ACTIVITY_LIST", arrayOf("*"))
    }

    val intentModuleParamList = Bundle().apply {
        putString("intent_output_enabled", "true")
        putString("intent_action", "com.zebra.nilac.dwpicklistocrdemo.SCANNER")
        putInt("intent_delivery", 2)
    }

    val intentModule = Bundle().apply {
        putString("PLUGIN_NAME", "INTENT")
        putString("RESET_CONFIG", "true")
        putBundle("PARAM_LIST", intentModuleParamList)
    }

    val keystrokeModuleParamList = Bundle().apply {
        putString("keystroke_output_enabled", "false")
    }

    val keystrokeModule = Bundle().apply {
        putString("PLUGIN_NAME", "KEYSTROKE")
        putString("RESET_CONFIG", "true")
        putBundle("PARAM_LIST", keystrokeModuleParamList)
    }

    bMain.putParcelableArrayList(
        "PLUGIN_CONFIG", arrayListOf(
            intentModule,
            keystrokeModule,
            enablePickListOCR()
        )
    )
    bMain.putParcelableArray("APP_LIST", arrayOf(configApplicationList))

    sendBroadcast(Intent().apply {
        action = "com.symbol.datawedge.api.ACTION"
        setPackage("com.symbol.datawedge")
        putExtra("com.symbol.datawedge.api.SET_CONFIG", bMain)
        putExtra("SEND_RESULT", "COMPLETE_RESULT")
        putExtra("COMMAND_IDENTIFIER", "CREATE_PROFILE")
    })
}

 

Let's briefly explain what's happening in this code snippet:

  • We're creating a new DataWedge profile, but only if it doesn't already exist.
  • We specify the package name and target activities. If the profile should apply to all activities, we use *.
  • We enable Intent output to receive PickList OCR results through a Broadcast Receiver that we'll register in the code.
  • We disable KeyStroke output as it's not needed for this implementation.
  • We assign a Command Identifier to the Intent, allowing us to trace the full result of the operation from DataWedge.

 

Enabling PickList OCR & associating rules

 

You've probably noticed the enablePickListOCR() method while creating the profile, so let's take a look at how the logic works:

 

private fun enablePickListOCR(): Bundle {
    val bPickListOcr = Bundle().apply {
        putString("module", "MlKitExModule")
        putBundle("module_params", Bundle().apply {
            putString("session_timeout", "3000") // Integer range 0 - 60000
            putString("illumination", "off") // on - off
            putString("output_image", "2") // 0 - Disabled, 2 - Cropped Image
            putString("script", "0") // Language script
            putString("confidence_level", "70") // Integer range 0 - 100
            putString("text_structure", "0") // 0 - Single Word, 1 - Single Line
            putString("picklist_mode", "0") // 0 - OCR or Barcode, 1 - OCR Only, 2 - Barcode Only

            putParcelableArrayList("rules",
                arrayListOf(
                    Bundle().apply {
                        putParcelableArrayList("rule_list", createOCRRules())
                        putString("rule_param_id", "report_ocr_data")
                    }
                )
            )
        })
    }

    val bPickListBarcode = Bundle().apply {
        putString("module", "BarcodeDecoderModule")
        putBundle("module_params", Bundle().apply {
            putParcelableArrayList("rules",
                arrayListOf(
                    Bundle().apply {
                        putParcelableArrayList("rule_list", createBarcodeRules())
                        putString("rule_param_id", "report_barcode_data")
                    }
                )
            )
        })
    }

    val bConfigWorkflowParamList = Bundle().apply {
        putString("workflow_name", "picklist_ocr")
        putString("workflow_input_source", "2")
        putParcelableArrayList("workflow_params", arrayListOf(bPickListOcr, bPickListBarcode))
    }

    val bConfigWorkflow = Bundle().apply {
        putString("PLUGIN_NAME", "WORKFLOW")
        putString("RESET_CONFIG", "true")

        putString("workflow_input_enabled", "true")
        putString("selected_workflow_name", "picklist_ocr")
        putString("workflow_input_source", "2") // 1 - Imager, 2 - Camera

        putParcelableArrayList("PARAM_LIST", arrayListOf(bConfigWorkflowParamList))
    }
    return bConfigWorkflow
}

 

Let's break down what we're doing here:

  • We specify the PickList OCR as our Workflow Input feature. (Note: DataWedge offers multiple "Workflow Input" features, which you can explore here.)
  • We configure the main parameters for PickList OCR under the MlKitExModule. For additional parameters, check here.
  • Within the MlKitExModule Bundle, we include another Bundle containing an ArrayList of Bundles. This will hold our OCR rules, which we'll define later.
  • We apply a similar approach for barcode scanning rules. However, in this case, we create a new Bundle referencing the BarcodeDecoderModule.

     

private fun createBarcodeRules(): ArrayList<Bundle> {
    val ean8Rule = Bundle().apply {
        putString("rule_name", "EAN8")
        putBundle("criteria", Bundle().apply {
            putParcelableArrayList(
                "identifier", arrayListOf(
                    Bundle().apply {
                        putString("criteria_key", "starts_with")
                        putString("criteria_value", "58")
                    }
                ))
            putStringArray("symbology", arrayOf("decoder_ean8"))
        })
        putParcelableArrayList("actions", arrayListOf(
            Bundle().apply {
                putString("action_key", "report")
                putString("action_value", "")
            }
        ))
    }
    return arrayListOf(ean8Rule)
}

 

For the barcode rules, we define only one in this case. We specify that the user is allowed to scan only EAN8 barcodes, disabling all other symbologies. Additionally, we require that the barcode starts with the prefix "58".

Each rule consists of two blocks. The first is the criteria, which is an ArrayList of identifiers for the actual rules we want to assign. The second is an ArrayList referencing the actions we can take if one of those criteria is met. In this case, the only action available is report.
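Conceptually, then, each rule pairs a set of criteria with a set of actions. A sketch in plain Kotlin (hypothetical Rule type and evaluate function, not the DataWedge API) might look like this:

```kotlin
// Illustrative only: a rule fires its actions when every criterion matches.
class Rule(
    val name: String,
    val criteria: List<(String) -> Boolean>,
    val actions: List<(String) -> Unit>
)

// Returns the names of the rules that fired for the given value.
fun evaluate(value: String, rules: List<Rule>): List<String> {
    val firedRules = mutableListOf<String>()
    for (rule in rules) {
        if (rule.criteria.all { it(value) }) {
            rule.actions.forEach { it(value) }
            firedRules.add(rule.name)
        }
    }
    return firedRules
}
```

An EAN8-style rule with a starts-with "58" criterion and a report action would fire for "58123456" but stay silent for any other input.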

 

private fun createOCRRules(): ArrayList<Bundle> {
    val testOcrRule = Bundle().apply {
        putString("rule_name", "TestOCR")
        putBundle("criteria", Bundle().apply {
            putParcelableArrayList(
                "identifier", arrayListOf(
                    Bundle().apply {
                        putString("criteria_key", "min_length")
                        putString("criteria_value", "3")
                    },
                    Bundle().apply {
                        putString("criteria_key", "max_length")
                        putString("criteria_value", "7")
                    },
                    Bundle().apply {
                        putString("criteria_key", "starts_with")
                        putString("criteria_value", "A")
                    },

                    Bundle().apply {
                        putString("criteria_key", "contains")
                        putString("criteria_value", "BA")
                    },

                    Bundle().apply {
                        putString("criteria_key", "ignore_case")
                        putString("criteria_value", "true")
                    })
            )
        })
        putParcelableArrayList("actions", arrayListOf(
            Bundle().apply {
                putString("action_key", "report")
                putString("action_value", "")
            }
        ))
    }
    return arrayListOf(testOcrRule)
}

 

Lastly, we define the rule for OCR operations.

This rule is more complex, using multiple criteria parameters:

  • The word must be 3-7 characters long
  • It must start with the letter "A"
  • It must contain the sequence "BA"
  • Case sensitivity is ignored (the "ignore_case" parameter is set to true)

Now that you understand how to create new rules for PickList OCR, you're ready to complete the logic for creating the profile and sending it to DataWedge via a Broadcast.

 

Parsing DataWedge Responses

 

Profile Creation

 

When defining the intent to send to DataWedge, we specified a Command Identifier. This identifier allows us to intercept DataWedge's confirmation of the operation, detailing whether everything went smoothly or if there were any issues. If you've followed my code snippets correctly, you shouldn't encounter any problems, and the profile should be created successfully.

To ensure you always receive a confirmation result from DataWedge about a past operation, I recommend always specifying a Command Identifier in the Intent you're sending to DataWedge. This way, you can be 100% certain that your application correctly handles scenarios where DataWedge is unavailable or simply not able to process your request properly:

 

putExtra("SEND_RESULT", "COMPLETE_RESULT")
putExtra("COMMAND_IDENTIFIER", "YourCommandIdentifier")

 

We'll now use this BroadcastReceiver to parse the result and verify that DataWedge correctly handled each module we specified during profile creation. Each module can return a different result code, which is why we perform these thorough checks. Feel free to register this receiver in your code and adapt it as needed:

 

private val dwReceiver: BroadcastReceiver = object : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        val action = intent.action
        val extras = intent.extras
        var resultInfo = ""

        if (extras != null && intent.hasExtra("RESULT_LIST")) {
            if (extras.getString("COMMAND_IDENTIFIER")
                    .equals("CREATE_PROFILE")
            ) {
                val resultList: ArrayList<Bundle> =
                    extras.get("RESULT_LIST") as ArrayList<Bundle>

                if (resultList.size > 0) {
                    var allSuccess = true

                    // Iterate through the result list for each module
                    for (result in resultList) {
                        val module = result.getString("MODULE")
                        val resultCode = result.getString("RESULT_CODE")
                        val subResultCode = result.getString("SUB_RESULT_CODE")

                        if (result.getString("RESULT").equals("FAILURE")
                            && !module.equals("APP_LIST")
                        ) {
                            // Profile creation failed for the module.
                            // Getting more information on what failed
                            allSuccess = false

                            resultInfo = "Module: $module\\n" // Name of the module that failed
                            resultInfo += "Result code: $resultCode\\n" // Information on the type of the failure
                            if (!subResultCode.isNullOrEmpty()) // More Information on the failure if exists
                                resultInfo += "\\tSub Result code: $subResultCode\\n"
                            break
                        } else {
                            // Profile creation success for the module.
                            resultInfo = "Module: " + result.getString("MODULE") + "\\n"
                            resultInfo += "Result: " + result.getString("RESULT") + "\\n"
                        }
                    }
                    if (allSuccess) {
                        Log.d(TAG, "Profile created successfully")
                    } else {
                        Log.e(TAG, "Profile creation failed!\\n\\n$resultInfo")
                    }
                }
            }
        }
    }
 }

 

PickList OCR

With the profile successfully created and PickList OCR enabled, you can now use one of the device's side scan trigger buttons to initiate the scanning process. Depending on your workflow setup, either the camera viewfinder will appear or the imager will start illuminating.

Remember that when scanning with the imager or camera, you'll need to press the trigger button twice: once to initiate scanning and again to confirm your selection. If you don't receive any feedback after pressing the button, it means your scan doesn't match any of the criteria specified in your rules.

After a successful scan, DataWedge returns the result for us to parse. Using the BroadcastReceiver we declared earlier, we'll now add this code snippet to extract the captured word or barcode:

 

//....

} else if (extras != null &&
    action.equals("com.zebra.nilac.dwpicklistocrdemo.SCANNER", ignoreCase = true)
) {
    val jsonData: String = extras.getString("com.symbol.datawedge.data")!!

    val jsonArray = JSONArray(jsonData)
    val jsonObject = jsonArray.getJSONObject(0)

    val uri = if (jsonObject.has("uri")) {
        jsonObject.getString("uri")
    } else {
        ""
    }

    if (uri.isEmpty() || jsonArray.length() == 1) {
        // No image involved: extract the captured word or barcode
        val stringData = jsonObject.get("string_data").toString()
    }
}

 

The returned object will be a stringified JSONArray. Since we can only scan one word or barcode at a time, the array will always contain either one or two JSONObject entries. The second entry, if present, may contain information about the captured image.

Remember that the cropped image will only be returned if these conditions are met:

  • Image output is enabled during the configuration of the MlKitExModule (it is enabled by default).
  • An OCR operation is performed. An image will never be returned if the user scans a barcode.

To determine if we should extract the cropped image from a possible OCR operation, we can perform these checks:

  • Verify whether the JSONArray's length is 1 or 2. If it's 1, there's no image involved in the operation.
  • Check for a URI in the first JSONObject of the array.
  • If there's no second entry or no URI, simply extract the word or barcode value by looking for the string_data key inside the JSONObject.
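The branching described above can be sketched in isolation; plain maps stand in for the JSONObjects here, purely for illustration:

```kotlin
// Illustrative only: decide whether a DataWedge result carries an image or just text.
// Each map stands in for one JSONObject of the returned JSONArray.
fun classifyResult(entries: List<Map<String, String>>): String {
    val first = entries.first()
    val uri = first["uri"].orEmpty()
    return if (entries.size == 1 || uri.isEmpty()) {
        "text:" + first["string_data"].orEmpty()
    } else {
        "image:$uri"
    }
}
```

A single-entry array always resolves to the captured text, while a two-entry array with a URI in the first object signals that image data is available.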

 

Extracting captured image

 

Finally, if we want to extract the captured image from an OCR operation, we can build upon the previous code. When the JSONArray has two entries, the final code would look like this with the added image extraction part:

 

//Extract image from provided URI
val baos = ByteArrayOutputStream()
var nextURI: String? = uri

val contentResolver: ContentResolver = application.contentResolver

// Loop to collect all the data from the URIs
while (!nextURI.isNullOrEmpty()) {
    val cursor = contentResolver.query(Uri.parse(nextURI), null, null, null, null)
    cursor?.use {
        nextURI = if (it.moveToFirst()) {
            val rawData = it.getBlob(it.getColumnIndex("raw_data"))
            baos.write(rawData)
            it.getString(it.getColumnIndex("next_data_uri"))
        } else {
            null
        }
    }
}

// Extract image data from the JSON object
val width = jsonObject.getInt("width")
val height = jsonObject.getInt("height")
val stride = jsonObject.getInt("stride")
val orientation = jsonObject.getInt("orientation")
val imageFormat = jsonObject.getString("imageformat")

// Decode the image
val bitmap: Bitmap = ImageProcessing.getInstance().getBitmap(
    baos.toByteArray(), imageFormat, orientation, stride, width, height
)

 

Remember the DataWedge permissions we added to the Android Manifest at the beginning? They're essential here, as we query DataWedge's content provider to retrieve the image data. Here's a breakdown of what's happening:

  • We initialize a ByteArrayOutputStream to store the raw image data.
  • We use a while loop to iterate through potentially multiple URIs containing image data:
    • For each URI, we query it using the ContentResolver.
    • We extract the raw data from the cursor and write it to our ByteArrayOutputStream.
    • We get the next URI (if any) to retrieve the remaining image data.
  • After collecting all the raw data, we extract the additional image information from the JSONObject (width, height, stride, orientation, and image format).
  • Finally, we use the ImageProcessing class, documented here, to construct a Bitmap from the collected data.
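The chained-URI collection step can also be exercised in isolation. In this sketch, fetch stands in for the ContentResolver query; the Chunk type and collect function are illustrative only:

```kotlin
import java.io.ByteArrayOutputStream

// Illustrative only: each "URI" yields a chunk of raw bytes plus a pointer to the next URI,
// mirroring the raw_data / next_data_uri columns returned by DataWedge's content provider.
class Chunk(val rawData: ByteArray, val nextUri: String?)

// Follows the next-URI chain until it runs out, concatenating all raw data.
fun collect(firstUri: String, fetch: (String) -> Chunk): ByteArray {
    val baos = ByteArrayOutputStream()
    var next: String? = firstUri
    while (!next.isNullOrEmpty()) {
        val chunk = fetch(next)
        baos.write(chunk.rawData)
        next = chunk.nextUri
    }
    return baos.toByteArray()
}
```

The loop terminates as soon as a chunk carries no next URI, which is exactly how the ContentResolver loop above knows it has read the whole image.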

 

Conclusions

 

If you have any questions about the integration process, feel free to ask in the comments. You can also find the full sample app here under our ZebraDevs Organization.

Happy coding!
