Edge Computing Playground

The Edge Computing Playground User Experience Platform is designed to give industry users an "out-of-the-box", integrated software-and-hardware edge computing experience. Whether you are an individual developer or an enterprise innovation team, Edge Computing Playground provides full-chain support, turning complex edge computing technology into a practical implementation path and accelerating edge computing innovation and productization.

Edge Computing Playground Introduction

This section is the Android algorithm SDK integration guide for the Edge Computing Playground User Experience Platform. It is intended to help developers quickly understand and deploy the supported Edge Computing algorithms.

Integrated Algorithm Services

| Algorithm Type | Function Description | Application Scenarios |
| --- | --- | --- |
| Face Recognition | Face detection, keypoint localization, feature extraction and comparison | Access-control attendance, identity verification |
| Mask Detection | Detect mask-wearing status (correct / incorrect / not worn) | Public-place safety monitoring |
| Gesture Recognition | Recognize common gesture actions | Human-computer interaction, smart home |
| QR Code Recognition | Quickly scan and decode QR codes/barcodes | Payment, information retrieval |
| OCR Text Recognition | Extract text from images and camera streams | ID card recognition, receipt processing |

Pre-deployment Preparation

Development Environment Requirements

| Item | Requirement |
| --- | --- |
| IDE | Android Studio Arctic Fox (2020.3.1) or higher |
| Android SDK | compileSdk 32 |
| Android NDK | 21.4.7075529 |
| CMake | 3.22.1 |
| Gradle | 7.0+ |
| JDK | 1.8 |
| Target device | ARM64 architecture (arm64-v8a) |
| Minimum system version | Android 5.0 (API 21) |
| Target system version | Android 11 (API 30) |

Note: Connect the device to adb first, then execute adb root, followed by adb shell setenforce 0 (this puts SELinux into permissive mode).

Required Software Installation

Android Studio Configuration

1. Install NDK and CMake

Open Android Studio → Settings → Appearance & Behavior → System Settings → Android SDK

  • Switch to SDK Tools tab

Check and install the following components:

  • NDK (Side by side) → Select version 21.4.7075529
  • CMake → Select version 3.22.1
  • Android SDK Build-Tools
  • Android SDK Platform-Tools

2. Configure Environment Variables (Windows System)

ANDROID_NDK_HOME = C:\Users\[Username]\AppData\Local\Android\Sdk\ndk\21.4.7075529

Gradle Configuration

Project root directory build.gradle:

plugins {
    alias(libs.plugins.android.application) apply false
    alias(libs.plugins.kotlin.android) apply false
}

Project Dependencies

Java/Kotlin Dependencies

Native Libraries (.so files)

The project depends on the following pre-compiled native libraries (contact us via the official website to obtain them). They must be placed in the app/src/main/jniLibs/arm64-v8a/ directory:

| Library Name | Description |
| --- | --- |
| libedge_computing_market.so | Main algorithm library (project build output) |
| libinference.so | Inference engine |
| libopencv_java4.so | OpenCV image processing library |
| libfacerecognition.so | Face recognition algorithm library |
| libmaskdet.so | Mask detection algorithm library |
| libpipeline.so | Gesture recognition pipeline library |
| libdeepocr.so | OCR text recognition library |
| libzbar.so | QR code recognition library |
| libzbar_ex.so | QR code extension library |
| libjsoncpp.so | JSON parsing library |
| libc++_shared.so | C++ runtime library |

Model File Deployment

Model files must be deployed to the specified directory on the device. The project's default path structure is:

/data/user/0/com.quectel.edgecomputingmarket/files/model/
├── face        # Sub-module name
│   └── sim.engine    # Sub-module model file
├── hand
│   └── sim.engine
├── person
│   └── sim.engine
└── reid
    └── sim.engine
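Models bundled in the APK's assets/ directory can be copied into this files/model/ layout on first launch. Below is a minimal sketch of the copy step; it uses plain java.io so the stream-copy logic is independent of Android APIs, and the ModelInstaller class and the file names in the usage note are illustrative, not part of the SDK.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative helper: stream-copies one bundled model file to its target
// location. On Android, `in` would come from AssetManager.open(...) and
// `out` from a FileOutputStream under getFilesDir().
public final class ModelInstaller {
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
            total += n;
        }
        out.flush();
        return total; // number of bytes written
    }
}
```

On Android this might be invoked as ModelInstaller.copy(getAssets().open("models/face_model/sim.engine"), new FileOutputStream(new File(getFilesDir(), "model/face/sim.engine"))), after creating the parent directories.
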

Model Core Information

Model Algorithm List

| No. | Algorithm Name | Model Type | Input Size | Hardware Acceleration | Description |
| --- | --- | --- | --- | --- | --- |
| 1 | Face Recognition | Recognition model | 256×256 | DSP | Recognize faces in images |
| 2 | Mask Detection | Detection model | 640×640 | DSP | Three categories: correct/incorrect/not worn |
| 3 | Gesture Detection | Detection model | 640×640 | DSP | Detect gesture regions |
| 4 | OCR Text Recognition | Recognition model | - | DSP | Text content recognition |
| 5 | QR Code Recognition | Decoding algorithm | - | CPU | Supports QR Code and other code formats |

Detailed Algorithm Descriptions

Face Recognition

Functional Modules:

  • Face Detection: Locate face positions in images/video streams, output bounding box coordinates
  • Keypoint Localization: Extract 5 facial keypoints (left eye, right eye, nose tip, left mouth corner, right mouth corner)
  • Feature Extraction: Generate 128-dimensional face feature vectors
  • Face Comparison: Calculate face similarity through cosine similarity

Technical Parameters:

| Parameter | Value |
| --- | --- |
| Detection confidence threshold | 0.5 |
| NMS threshold | 0.5 |
| Minimum face area ratio | 1% of image area |
| Feature dimension | 128 |
| Similarity threshold | 0.6 |

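The comparison step above (cosine similarity over 128-dimensional feature vectors, accepted as a match when the score exceeds the 0.6 threshold) can be sketched in plain Java. FaceMatcher is an illustrative class, not part of the SDK:

```java
// Illustrative sketch of the matching rule: two face feature vectors are
// compared by cosine similarity, and a match is accepted only when the
// similarity exceeds the 0.6 threshold documented above.
public final class FaceMatcher {
    public static float cosineSimilarity(float[] a, float[] b) {
        float dot = 0f, normA = 0f, normB = 0f;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (float) (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static boolean isMatch(float[] a, float[] b) {
        return cosineSimilarity(a, b) > 0.6f;
    }
}
```
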
Mask Detection

Detection Categories:

| Category ID | Category Name | Description |
| --- | --- | --- |
| 0 | Correctly_Worn | Correctly wearing a mask |
| 1 | Not_Worn | Not wearing a mask |
| 2 | Incorrectly_Worn | Incorrectly wearing a mask (covering mouth only / nose only, etc.) |

Technical Parameters:

| Parameter | Default Value |
| --- | --- |
| Confidence threshold | 0.25 |
| NMS threshold | 0.45 |
| Minimum detection box area | 1% of image area |

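The two thresholds above work together in standard detection post-processing: boxes scoring below the 0.25 confidence threshold are discarded, then greedy non-maximum suppression removes any box whose IoU with an already-kept, higher-scoring box exceeds 0.45. A hedged sketch (Box and MaskNms are illustrative types, not the SDK's API):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public final class MaskNms {
    public static final class Box {
        public final float x, y, w, h, score;
        public Box(float x, float y, float w, float h, float score) {
            this.x = x; this.y = y; this.w = w; this.h = h; this.score = score;
        }
    }

    // Intersection-over-union of two axis-aligned boxes.
    static float iou(Box a, Box b) {
        float x1 = Math.max(a.x, b.x), y1 = Math.max(a.y, b.y);
        float x2 = Math.min(a.x + a.w, b.x + b.w), y2 = Math.min(a.y + a.h, b.y + b.h);
        float inter = Math.max(0f, x2 - x1) * Math.max(0f, y2 - y1);
        return inter / (a.w * a.h + b.w * b.h - inter);
    }

    // Drop low-confidence boxes, then keep boxes greedily by descending score,
    // suppressing any candidate that overlaps a kept box too much.
    public static List<Box> filter(List<Box> boxes, float confThresh, float iouThresh) {
        List<Box> sorted = new ArrayList<>();
        for (Box b : boxes) if (b.score >= confThresh) sorted.add(b);
        sorted.sort(Comparator.comparingDouble((Box b) -> -b.score));
        List<Box> kept = new ArrayList<>();
        for (Box cand : sorted) {
            boolean suppressed = false;
            for (Box k : kept) if (iou(k, cand) > iouThresh) { suppressed = true; break; }
            if (!suppressed) kept.add(cand);
        }
        return kept;
    }
}
```
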
Gesture Recognition

Supported Gesture Types:

| No. | Gesture Name | Description |
| --- | --- | --- |
| 1 | like | Thumbs-up gesture |
| 2 | dislike | Thumbs-down gesture |
| 3 | call | Phone-call gesture |
| 4 | fist | Fist gesture |
| 5 | four | Four-fingers gesture |
| 6 | mute | Mute gesture |
| 7 | ok | OK gesture |
| 8 | one | Index-finger gesture |
| 9 | palm | Open palm |
| 10 | other | Other gestures |
| 11 | three | Three-fingers gesture |
| 12 | yeah | Victory/V gesture |

Pipeline Process: Detection → Cropping → Classification
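The three-stage flow above can be sketched as follows. Detector, Classifier, and Rect are hypothetical stand-ins for the SDK's internal components, shown only to illustrate how the stages compose:

```java
import java.util.ArrayList;
import java.util.List;

public final class GesturePipeline {
    public static final class Rect {
        public final int x, y, w, h;
        public Rect(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
    }

    public interface Detector { List<Rect> detect(int[][] frame); }
    public interface Classifier { String classify(int[][] patch); }

    private final Detector detector;
    private final Classifier classifier;

    public GesturePipeline(Detector d, Classifier c) { detector = d; classifier = c; }

    // Stage 1: detect hand regions; stage 2: crop each region out of the
    // frame; stage 3: classify each crop into a gesture label.
    public List<String> run(int[][] frame) {
        List<String> labels = new ArrayList<>();
        for (Rect r : detector.detect(frame)) {
            labels.add(classifier.classify(crop(frame, r)));
        }
        return labels;
    }

    private static int[][] crop(int[][] frame, Rect r) {
        int[][] patch = new int[r.h][r.w];
        for (int row = 0; row < r.h; row++)
            for (int col = 0; col < r.w; col++)
                patch[row][col] = frame[r.y + row][r.x + col];
        return patch;
    }
}
```
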

OCR Text Recognition

Functional Features:

  • Supports multi-line text detection
  • Automatically recognizes mixed Chinese-English text
  • Returns text content and position information

Technical Parameters:

| Parameter | Value |
| --- | --- |
| Confidence threshold | 0.2 |
| Input size | 256×256 |
| Supported languages | Chinese, English |

QR Code Recognition

Supported Code Formats:

  • QR Code
  • EAN-13 (configurable to disable)
  • Other ZBar-supported code formats

Configuration Parameters:

g_qrScanner->set_config(ZBAR_QRCODE, ZBAR_CFG_X_DENSITY, 4);
g_qrScanner->set_config(ZBAR_QRCODE, ZBAR_CFG_Y_DENSITY, 4);

Model Migration Process (Face Recognition Example)

This section takes face recognition as an example to introduce in detail how to integrate algorithm models from scratch in Android Studio.

Project Structure Creation

app/src/main/
├── cpp/
│   ├── CMakeLists.txt           # CMake configuration file
│   ├── edge_computing_market.cpp             # JNI interface implementation
│   └── include/                 # Header file directory
│       ├── faceRecogInterfaceEX.hpp
│       ├── SNPEClass.hpp
│       └── ...
├── java/com/quectel/edgecomputingmarket/
│   └── manager/
│       └── JniManager.java      # JNI management class
├── jniLibs/arm64-v8a/           # Native library directory
│   ├── libSNPE.so
│   ├── libopencv_java4.so
│   └── ...
└── assets/models/
    └── face_model/
        └── sim.engine

CMake Configuration

Create app/src/main/cpp/CMakeLists.txt:

cmake_minimum_required(VERSION 3.22.1)
project("edge_computing_market" LANGUAGES CXX)

# ========== Path Configuration ==========
set(THIRD_PARTY_SO_ROOT "${CMAKE_SOURCE_DIR}/../jniLibs")
set(INCLUDE_ROOT "${CMAKE_SOURCE_DIR}/include")
set(LOCAL_SRC_DIR "${CMAKE_SOURCE_DIR}")
set(SUPPORTED_ABI "arm64-v8a")
set(THIRD_PARTY_SO_DIR "${THIRD_PARTY_SO_ROOT}/${SUPPORTED_ABI}")

# ========== Import Pre-compiled Libraries ==========

# OpenCV library
set(OPENCV_LIB_NAME "opencv_java4")
add_library(${OPENCV_LIB_NAME} SHARED IMPORTED)
set_target_properties(${OPENCV_LIB_NAME} PROPERTIES
    IMPORTED_LOCATION "${THIRD_PARTY_SO_DIR}/lib${OPENCV_LIB_NAME}.so"
)

# SNPE inference engine
set(SNPE_LIB_NAME "SNPE")
add_library(${SNPE_LIB_NAME} SHARED IMPORTED)
set_target_properties(${SNPE_LIB_NAME} PROPERTIES
    IMPORTED_LOCATION "${THIRD_PARTY_SO_DIR}/lib${SNPE_LIB_NAME}.so"
)

# C++ runtime
set(C_SHARED_LIB_NAME "c++_shared")
add_library(${C_SHARED_LIB_NAME} SHARED IMPORTED)
set_target_properties(${C_SHARED_LIB_NAME} PROPERTIES
    IMPORTED_LOCATION "${THIRD_PARTY_SO_DIR}/lib${C_SHARED_LIB_NAME}.so"
)

# Face recognition library
set(FACE_LIB_NAME "facerecognition")
add_library(${FACE_LIB_NAME} SHARED IMPORTED)
set_target_properties(${FACE_LIB_NAME} PROPERTIES
    IMPORTED_LOCATION "${THIRD_PARTY_SO_DIR}/lib${FACE_LIB_NAME}.so"
)

# ========== Header File Configuration ==========
function(recursive_include dir)
    if(IS_DIRECTORY ${dir})
        include_directories(${dir})
        file(GLOB SUB_DIRS RELATIVE ${dir} "${dir}/*")
        foreach(sub_dir ${SUB_DIRS})
            set(full_sub_dir "${dir}/${sub_dir}")
            if(IS_DIRECTORY ${full_sub_dir})
                recursive_include(${full_sub_dir})
            endif()
        endforeach()
    endif()
endfunction()
recursive_include(${INCLUDE_ROOT})

# ========== Build edge_computing_market.so ==========
file(GLOB LOCAL_SRC_FILES
    "${LOCAL_SRC_DIR}/edge_computing_market.cpp"
)

find_library(LOG_LIB log REQUIRED)
find_library(JNI_GRAPHICS_LIB jnigraphics REQUIRED)

add_library(edge_computing_market SHARED ${LOCAL_SRC_FILES})

target_compile_features(edge_computing_market PRIVATE cxx_std_17)
target_compile_options(edge_computing_market PRIVATE
    -frtti
    -fexceptions
    -Wno-error=format-security
    -fPIC
)

target_link_libraries(edge_computing_market PRIVATE
    ${C_SHARED_LIB_NAME}
    ${FACE_LIB_NAME}
    ${OPENCV_LIB_NAME}
    ${SNPE_LIB_NAME}
    ${JNI_GRAPHICS_LIB}
    ${LOG_LIB}
    c m dl atomic
)

JNI Interface Layer Implementation

Java Layer Interface Definition

Create JniManager.java:

package com.quectel.edgecomputingmarket.manager;

public class JniManager {
    private static final String TAG = "JniManager";
    private static final String SO_NAME = "edge_computing_market";
    private static volatile JniManager INSTANCE;
    private static boolean isSoLoaded = false;

    // Load Native library
    static {
        try {
            System.loadLibrary(SO_NAME);
            isSoLoaded = true;
        } catch (UnsatisfiedLinkError e) {
            isSoLoaded = false;
        }
    }

    // Singleton pattern
    public static JniManager getInstance() {
        if (INSTANCE == null) {
            synchronized (JniManager.class) {
                if (INSTANCE == null) {
                    INSTANCE = new JniManager();
                }
            }
        }
        return INSTANCE;
    }

    // ========== Face Recognition Interface ==========

    /**
     * Set SNPE environment variable
     * @param soPath Native library path
     */
    public native void setAdspEnv(String soPath);

    /**
     * Initialize face recognition model
     * @param faceDetModelPath Face detection model path
     * @param faceLandmarkModelPath Keypoint model path
     * @param faceRegModelPath Feature extraction model path
     * @param faceDbPath Face database path
     * @return 0-success, negative value-error code
     */
    public native int initFaceRecognition(
        String faceDetModelPath,
        String faceLandmarkModelPath,
        String faceRegModelPath,
        String faceDbPath
    );

    /**
     * Face recognition from image
     * @param imagePath Input image path
     * @param saveDir Result image save directory
     * @return Result image path
     */
    public native String faceRecognitionFromImage(String imagePath, String saveDir);

    /**
     * Real-time face recognition from camera stream
     * @param mergedYuvData YUV data
     * @param width Image width
     * @param height Image height
     * @param rotation Rotation angle
     * @return Detection result array [face count, x, y, w, h, id, score, keypoint count, keypoint coordinates...]
     */
    public native float[] faceRecognitionFromCameraX(
        byte[] mergedYuvData,
        int width,
        int height,
        int rotation
    );

    /**
     * Release face recognition resources
     */
    public native void releaseCameraFaceRecog();
}

C++ Layer JNI Implementation

Create edge_computing_market.cpp (core code snippet):

#include <jni.h>
#include <string>
#include <vector>
#include <memory>   // std::unique_ptr
#include <sstream>  // std::stringstream
#include <android/log.h>
#include "faceRecogInterfaceEX.hpp"
#include "opencv2/opencv.hpp"

#define LOG_TAG "JNI_edge_computing_market"
#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)

// Global face recognition instance
static std::unique_ptr<FaceRecogInterFaceEX::FaceRecogIFEX> g_frIfEx = nullptr;
static std::vector<std::vector<float>> g_faceFeatures;
static int g_validFeatureDim = 0;

void SetAdspLibraryPath(const std::string &nativeLibPath) {
    std::stringstream spath;
    std::stringstream spath1;
    spath << nativeLibPath << ";/vendor/lib/rfsa/adsp;/vendor/dsp;/vendor/dsp/cdsp";
    spath1 << nativeLibPath;
    LOGD("nativeLibPath is %s", nativeLibPath.c_str());
    if (setenv("ADSP_LIBRARY_PATH", spath.str().c_str(), 1 /*override*/) == 0) {
        LOGD("SNPE environment configured successfully");
    } else {
        LOGD("SNPE environment configuration failed");
    }
    if (setenv("LD_LIBRARY_PATH", spath1.str().c_str(), 1 /*override*/) == 0) {
        LOGD("SNPE environment configured successfully");
    } else {
        LOGD("SNPE environment configuration failed");
    }
    LOGD("ADSP_LIBRARY_PATH is %s", getenv("ADSP_LIBRARY_PATH"));
    LOGD("LD_LIBRARY_PATH is %s", getenv("LD_LIBRARY_PATH"));
}

extern "C" JNIEXPORT void JNICALL
Java_com_quectel_edgecomputingmarket_manager_JniManager_setAdspEnv(
    JNIEnv *env,
    jobject thiz,
    jstring soLibraryPath
) {
    const char *ldPath = env->GetStringUTFChars(soLibraryPath, nullptr);

    if (ldPath == nullptr) {
        // GetStringUTFChars failed, so there is nothing to release.
        LOGE("Environment variable path is empty!");
        return;
    }

    SetAdspLibraryPath(ldPath);
    env->ReleaseStringUTFChars(soLibraryPath, ldPath);
}

/**
 * Initialize face recognition model
 */
extern "C"
JNIEXPORT jint JNICALL
Java_com_quectel_edgecomputingmarket_manager_JniManager_initFaceRecognition(
    JNIEnv *env, jobject thiz,
    jstring face_det_model_path,
    jstring face_landmark_model_path,
    jstring face_reg_model_path,
    jstring face_db_path) {

    try {
        if (g_frIfEx != nullptr) {
            LOGD("Face recognition already initialized");
            return 0;
        }

        // Get model paths
        char *detModel = const_cast<char *>(env->GetStringUTFChars(face_det_model_path, nullptr));
        char *landmarkModel = const_cast<char *>(env->GetStringUTFChars(face_landmark_model_path, nullptr));
        char *regModel = const_cast<char *>(env->GetStringUTFChars(face_reg_model_path, nullptr));
        char *dbPath = const_cast<char *>(env->GetStringUTFChars(face_db_path, nullptr));

        // Create face recognition instance
        g_frIfEx = std::make_unique<FaceRecogInterFaceEX::FaceRecogIFEX>(
            detModel, landmarkModel, regModel);

        int err = g_frIfEx->init();
        if (err != 0) {
            g_frIfEx.reset();
            // Release the JNI strings before the early return to avoid a leak
            env->ReleaseStringUTFChars(face_det_model_path, detModel);
            env->ReleaseStringUTFChars(face_landmark_model_path, landmarkModel);
            env->ReleaseStringUTFChars(face_reg_model_path, regModel);
            env->ReleaseStringUTFChars(face_db_path, dbPath);
            return err;
        }

        // Load face database
        std::vector<std::string> filePaths;
        cv::glob(dbPath, filePaths);  // enumerate face database images
        g_faceFeatures.clear();

        for (const auto &path: filePaths) {
            cv::Mat img = cv::imread(path);
            if (img.empty()) continue;

            std::vector<FaceRecogInterFaceEX::FaceResults_t> faceResults;
            err = g_frIfEx->detect_face(img, faceResults, 0.5, 0.5);

            if (err == 1 && !faceResults.empty() && !faceResults[0].feature.empty()) {
                g_faceFeatures.emplace_back(faceResults[0].feature);
                g_validFeatureDim = faceResults[0].feature.size();
            }
        }

        // Release string resources
        env->ReleaseStringUTFChars(face_det_model_path, detModel);
        env->ReleaseStringUTFChars(face_landmark_model_path, landmarkModel);
        env->ReleaseStringUTFChars(face_reg_model_path, regModel);
        env->ReleaseStringUTFChars(face_db_path, dbPath);

        return g_faceFeatures.empty() ? -2 : 0;
    } catch (...) {
        return -5;
    }
}

/**
 * Camera stream face recognition
 */
extern "C"
JNIEXPORT jfloatArray JNICALL
Java_com_quectel_edgecomputingmarket_manager_JniManager_faceRecognitionFromCameraX(
    JNIEnv *env, jobject thiz,
    jbyteArray mergedYuvData_,
    jint width, jint height, jint rotation) {

    if (g_frIfEx == nullptr || g_faceFeatures.empty()) {
        jfloatArray emptyArray = env->NewFloatArray(1);
        float emptyVal = 0.0f;
        env->SetFloatArrayRegion(emptyArray, 0, 1, &emptyVal);
        return emptyArray;
    }

    // Get YUV data
    jbyte *mergedYuvData = env->GetByteArrayElements(mergedYuvData_, nullptr);

    // YUV to BGR
    cv::Mat yuvMat(height * 3 / 2, width, CV_8UC1, (unsigned char *) mergedYuvData);
    cv::Mat bgrMat;
    cv::cvtColor(yuvMat, bgrMat, cv::COLOR_YUV2BGR_I420);

    // Image rotation
    cv::Mat rotatedMat;
    switch (rotation) {
        case 90: cv::rotate(bgrMat, rotatedMat, cv::ROTATE_90_CLOCKWISE); break;
        case 180: cv::rotate(bgrMat, rotatedMat, cv::ROTATE_180); break;
        case 270: cv::rotate(bgrMat, rotatedMat, cv::ROTATE_90_COUNTERCLOCKWISE); break;
        default: rotatedMat = bgrMat.clone(); break;
    }

    // Face detection
    std::vector<FaceRecogInterFaceEX::FaceResults_t> faceResults;
    g_frIfEx->detect_face(rotatedMat, faceResults, 0.5f, 0.5f);

    // Feature matching
    for (auto &face: faceResults) {
        if (!face.feature.empty() && face.feature.size() == g_validFeatureDim) {
            std::pair<int, float> matchRes = g_frIfEx->cosine_similarity(
                g_faceFeatures, face.feature, 0.6f);
            face.code = matchRes.first;
        }
    }

    // Build return result
    int resultSize = 1;
    for (const auto &face: faceResults) {
        resultSize += 7 + face.key_pts.size() * 2;
    }

    jfloatArray resultArray = env->NewFloatArray(resultSize);
    std::unique_ptr<float[]> resultData(new float[resultSize]);
    int idx = 0;
    resultData[idx++] = static_cast<float>(faceResults.size());

    for (const auto &face: faceResults) {
        resultData[idx++] = static_cast<float>(face.x);
        resultData[idx++] = static_cast<float>(face.y);
        resultData[idx++] = static_cast<float>(face.width);
        resultData[idx++] = static_cast<float>(face.height);
        resultData[idx++] = static_cast<float>(face.code);
        resultData[idx++] = 0.0f; // score
        resultData[idx++] = static_cast<float>(face.key_pts.size());
        for (const auto &pt: face.key_pts) {
            resultData[idx++] = static_cast<float>(pt.x);
            resultData[idx++] = static_cast<float>(pt.y);
        }
    }

    env->SetFloatArrayRegion(resultArray, 0, resultSize, resultData.get());
    env->ReleaseByteArrayElements(mergedYuvData_, mergedYuvData, JNI_ABORT);
    return resultArray;
}

/**
 * Release resources
 */
extern "C"
JNIEXPORT void JNICALL
Java_com_quectel_edgecomputingmarket_manager_JniManager_releaseCameraFaceRecog(
    JNIEnv *env, jobject thiz) {
    g_frIfEx.reset();
    g_faceFeatures.clear();
    g_validFeatureDim = 0;
}

build.gradle Configuration

Add the following in app/build.gradle:

android {
    // ... other configurations ...

    defaultConfig {
        // ... other configurations ...

        externalNativeBuild {
            cmake {
                cppFlags ""
                abiFilters 'arm64-v8a'
            }
        }
        ndk {
            abiFilters 'arm64-v8a'
        }
    }

    externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
            version "3.22.1"
        }
    }
    ndkVersion '21.4.7075529'
}

Usage Examples

Model Initialization

public class MainActivity extends AppCompatActivity {
    private JniManager jniManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Get JNI manager instance
        jniManager = JniManager.getInstance();

        // Set SNPE environment
        String nativeLibDir = getApplicationInfo().nativeLibraryDir;
        jniManager.setAdspEnv(nativeLibDir);

        // Initialize face recognition
        String modelPath = getFilesDir() + "/model/face_model";
        String faceDbPath = getFilesDir() + "/model/facedb";

        // The model file names below are examples only; substitute the names
        // of the model files actually deployed on your device.
        int result = jniManager.initFaceRecognition(
            modelPath + "/det.engine",       // face detection model
            modelPath + "/landmark.engine",  // keypoint model
            modelPath + "/reg.engine",       // feature extraction model
            faceDbPath
        );

        if (result == 0) {
            Log.d("EdgeComputingMarket", "Face recognition initialized successfully");
        } else {
            Log.e("EdgeComputingMarket", "Face recognition initialization failed: " + result);
        }
    }
}

Camera Stream Face Recognition

The following example uses CameraX and requires the corresponding dependencies to be imported; other camera implementations can be used instead:

// Use CameraX to get preview frames
previewView.getPreviewStreamState().observe(this, state -> {
    // Camera ready
});

// Process frame data in ImageAnalysis
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
    .setTargetResolution(new Size(640, 480))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build();

imageAnalysis.setAnalyzer(executor, image -> {
    ImageProxy imageProxy = image;
    // Merge the YUV_420_888 planes into one contiguous buffer.
    // yuv420ToNv21 is a user-supplied helper; the layout it produces must
    // match the conversion used on the native side (COLOR_YUV2BGR_I420).
    byte[] yuvData = yuv420ToNv21(imageProxy);

    // Call face recognition
    float[] results = jniManager.faceRecognitionFromCameraX(
        yuvData,
        imageProxy.getWidth(),
        imageProxy.getHeight(),
        getRotationDegrees()
    );

    // Parse results
    if (results != null && results.length > 1) {
        int faceCount = (int) results[0];
        int idx = 1;
        for (int i = 0; i < faceCount; i++) {
            float x = results[idx++];
            float y = results[idx++];
            float w = results[idx++];
            float h = results[idx++];
            int id = (int) results[idx++];
            // Skip the score, the keypoint count, and the keypoint coordinates
            float score = results[idx++];
            int kpCount = (int) results[idx++];
            idx += kpCount * 2;

            Log.d("FaceResult", "Face " + i + ": pos=(" + x + "," + y +
                "), id=" + id);
        }
    }

    image.close();
});
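For reference, the flat result array (face count, then per-face x, y, w, h, id, score, keypoint count, and keypoint coordinates) can also be unpacked with a small self-contained helper. The Face type here is illustrative, not part of the SDK:

```java
import java.util.ArrayList;
import java.util.List;

public final class FaceResultParser {
    public static final class Face {
        public final float x, y, w, h, score;
        public final int id, keypointCount;
        Face(float x, float y, float w, float h, int id, float score, int kpCount) {
            this.x = x; this.y = y; this.w = w; this.h = h;
            this.id = id; this.score = score; this.keypointCount = kpCount;
        }
    }

    // Layout: [faceCount, then per face: x, y, w, h, id, score, kpCount,
    // followed by kpCount*2 keypoint coordinates].
    public static List<Face> parse(float[] r) {
        List<Face> faces = new ArrayList<>();
        if (r == null || r.length < 1) return faces;
        int count = (int) r[0];
        int idx = 1;
        for (int i = 0; i < count; i++) {
            float x = r[idx++], y = r[idx++], w = r[idx++], h = r[idx++];
            int id = (int) r[idx++];
            float score = r[idx++];
            int kp = (int) r[idx++];
            idx += kp * 2; // skip keypoint coordinates
            faces.add(new Face(x, y, w, h, id, score, kp));
        }
        return faces;
    }
}
```
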

Resource Release

@Override
protected void onDestroy() {
    super.onDestroy();
    if (jniManager != null) {
        jniManager.releaseCameraFaceRecog();
    }
}

Demo Demonstration

Select the desired function from the tabs on the left: Local Picture recognizes a locally selected image, Live Camera runs recognition on the device camera, and APM shows device performance metrics and runtime logs.

Select Local Picture, click the gray area in the Before region to upload an image.

Select Live Camera, click Start Camera to begin camera recognition.

Common Troubleshooting

Note: This project requires system libraries, so make sure targetSdk ≤ 30. Before installing the application, connect to the device via adb and execute adb root, then adb shell setenforce 0. In addition, the ADSP environment variable must be set (via setAdspEnv) before the algorithm .so libraries are initialized; refer to the code above.

| Problem | Possible Cause | Solution |
| --- | --- | --- |
| Algorithm validity expired | The algorithm is valid for one hour after initialization and expires after that | Exit the current application, clear it from the background, and restart |
| UnsatisfiedLinkError | .so library not loaded correctly | Check that the jniLibs directory structure matches the target architecture, and that the environment variables are set |
| Model initialization failed (-1) | Model file does not exist or the path is wrong | Check that the model files were correctly copied to the device |
| Face database loading failed (-2) | Face database images are in the wrong format | Use standard JPEG images |
| Detection result is empty | Image format or size is incorrect; some cameras have non-fixed mounting angles, producing rotated/mirrored frames | Check the YUV data format and rotation angle, and rotate input images correctly |
| DSP unavailable | Device does not support SNPE DSP | Use a different device |