Integration Workflow of the Qualcomm AI Engine Direct SDK
1. Training and inference workflow: This diagram illustrates the difference between training a neural network (done on a server) and running inference (making predictions) on device. Training involves creating and optimizing a neural network model using large datasets, and is typically done off device. Inference is the process of using the trained model to make predictions or decisions on new data, and usually runs on the device where the application runs.
2. Qualcomm AI Engine Direct integration workflow: This diagram details how to integrate trained deep learning (DL) networks into applications using the Qualcomm AI Engine Direct SDK. It involves converting the trained models into a format suitable for execution on Qualcomm chipsets and integrating them into applications.
Now, let's connect these workflows with an explanation of the integration process.
Integration Workflow Steps:
1. Conversion: Clients start by using the Qualcomm AI Engine Direct converter tool to convert their trained network model into a format compatible with the SDK.
2. OpPackage definition: If the source model contains operations not natively supported by the Qualcomm AI Engine Direct backend, clients provide OpPackage definition files to the converter. These files define the custom operations.
3. Model converter output: The converter outputs a '.cpp' file containing API calls that construct the network graph, and a '.bin' file containing the network's weights and biases.
4. Model library generation: Optionally, clients can use the model library generator tool to produce a model library from the converter output.
5. Integration: Clients integrate the Qualcomm AI Engine Direct model into their application either by dynamically loading the model library file or by compiling and statically linking the '.cpp' and '.bin' files (a sketch of the dynamic-loading approach follows this list).
6. Execution: To run inference, clients load the required backend accelerator library and any OpPackage libraries, which are registered with and loaded by the backend.
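To make steps 5 and 6 more concrete, here is a minimal C++ sketch, assuming a POSIX target, that dynamically loads a backend library and a generated model library with dlopen and resolves an entry point with dlsym. The file names libQnnHtp.so and libmy_model.so and the symbol QnnInterface_getProviders are illustrative assumptions; the actual names depend on your SDK version and the chosen backend, so check the SDK documentation.

// Minimal sketch of dynamically loading a backend and model library
// (steps 5 and 6). Library and symbol names below are illustrative
// assumptions, not guaranteed by the SDK.
#include <dlfcn.h>
#include <cstdio>
#include <cstdlib>

int main() {
    // Load the backend accelerator library (name is an assumption).
    void* backend = dlopen("libQnnHtp.so", RTLD_NOW | RTLD_LOCAL);
    if (!backend) {
        std::fprintf(stderr, "backend load failed: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    // Load the model library produced in step 4 (name is an assumption).
    void* model = dlopen("libmy_model.so", RTLD_NOW | RTLD_LOCAL);
    if (!model) {
        std::fprintf(stderr, "model load failed: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    // Resolve the backend's interface entry point; the symbol name is
    // an assumption for illustration.
    void* getProviders = dlsym(backend, "QnnInterface_getProviders");
    if (!getProviders) {
        std::fprintf(stderr, "symbol lookup failed: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    // From here the application would query the interface, create a
    // backend and context, register any OpPackage libraries, compose
    // the graph from the model library, and execute inference.
    std::printf("backend and model libraries loaded\n");
    dlclose(model);
    dlclose(backend);
    return EXIT_SUCCESS;
}

The alternative in step 5, statically linking the generated '.cpp' and '.bin' files, avoids the dlopen step entirely, at the cost of rebuilding the application whenever the model changes.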
In simpler terms, this integration workflow guides developers through preparing, converting, and integrating trained neural network models into their applications, ensuring compatibility with Qualcomm chipsets and efficient on-device execution.