Posts

Showing posts from April, 2024

T02:- How to Set Up the Qualcomm Neural Processing SDK

    Setup of the Qualcomm Neural Processing SDK

We will now go through how to set up the Qualcomm Neural Processing Engine SDK (SNPE) on our system. To start with, we have to understand what is required for this setup:

1. Oracle VirtualBox: https://www.virtualbox.org/wiki/Downloads
2. Ubuntu 20.04: https://releases.ubuntu.com/focal/ (a WSL2 environment on Windows 10/11 also works)
3. Any machine learning framework:
   3.1 TensorFlow
   3.2 PyTorch
   3.3 TensorFlow Lite
   3.4 ONNX
4. Android NDK
5. Microsoft Visual C++ (Redistributable)

Now let us move to the practical aspect of the SNPE setup.

1. We will first download VirtualBox as per the requirement. ...
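Before starting the setup, it can help to confirm which of the listed frameworks are already installed. The following is a minimal Python sketch of such a check; the module names in `FRAMEWORKS` are our assumptions about typical package names, and this is only a convenience script, not part of the SDK.

```python
import importlib.util
import platform

# Assumed import names for the frameworks listed above; install
# whichever one you actually plan to use with the SDK.
FRAMEWORKS = {
    "TensorFlow": "tensorflow",
    "PyTorch": "torch",
    "TensorFlow Lite": "tflite_runtime",
    "ONNX": "onnx",
}

def check_prerequisites():
    """Report the host OS and which ML frameworks are importable."""
    report = {"os": platform.system()}
    for name, module in FRAMEWORKS.items():
        # find_spec returns None when the module is not installed
        report[name] = importlib.util.find_spec(module) is not None
    return report

if __name__ == "__main__":
    for key, value in check_prerequisites().items():
        print(f"{key}: {value}")
```

Running it prints one line per prerequisite, so you know which frameworks still need installing before the SDK conversion tools will work.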

T01:- Introduction to On-device AI

                Understanding On-device AI

Smart devices like phones and cars are getting smarter. They can crunch data and learn without needing the internet all the time. Qualcomm is a big part of making this happen.

Fig 1.1 (On-device AI)

So, what is on-device AI? On-device means that smart devices like phones or smartwatches can work without a constant internet connection. It's like having a live brain inside your gadget, making quick decisions and performing tasks without relying on remote servers. This technology is a good fit for things like voice assistants and image recognition, as it reduces latency and ensures privacy by keeping data local.

Fig 1.2 (On-...

Integration Workflow of the AI Engine SDK

                        Integration Workflow

1. Training and inference workflow: This diagram illustrates the difference between training a neural network (done on a server) and running inference (making predictions) on device. Training involves creating and optimizing a neural network model using large datasets, and is typically done off device. Inference is the process of using the trained model to make predictions or decisions based on new data, and it usually happens on the device where the application runs.

2. Qualcomm AI Engine Direct integration workflow: This details how to integrate trained deep learning (DL) networks into applications using the Qualcomm AI Engine SDK. It involves converting the trained models into a format suitable for execution on Qualcomm chipsets and integrating them into applications.

Now, let's connect these workflows with an explanation of the integration process. Integration WorkST...
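The train-off-device / convert / infer-on-device split described above can be illustrated with a toy Python sketch. Everything here is our own stand-in: the "model" is a single linear function and the "conversion" is a naive weight quantization; the real SDK performs conversion with its own model-converter tools, not with code like this.

```python
# Toy illustration of the train -> convert -> infer workflow split.
# Function names and the quantization scheme are illustrative only.

def train_off_device():
    """Pretend 'training' on a server: float weights for y = 2x + 1."""
    return {"w": 2.0, "b": 1.0}

def convert_for_device(model, scale=0.1):
    """Quantize float weights to small integers, as on-device model
    formats often do to shrink the model and speed up inference."""
    return {k: round(v / scale) for k, v in model.items()}, scale

def infer_on_device(qmodel, scale, x):
    """Run inference using the converted (quantized) weights."""
    w = qmodel["w"] * scale
    b = qmodel["b"] * scale
    return w * x + b

model = train_off_device()
qmodel, scale = convert_for_device(model)
print(infer_on_device(qmodel, scale, 3.0))  # close to 7.0
```

The point of the sketch is the separation of stages: the heavy training step never runs on the device; only the compact converted model and the cheap inference step do.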

Software Architecture of the Qualcomm AI Engine SDK

         Software Architecture of the AI Engine SDK

The software architecture of the Qualcomm AI Engine Direct API and its associated software stack facilitates the construction, optimization, and execution of network models on the hardware accelerator cores.

Key components of the software stack:
1. Device: A virtual representation of the hardware where all the action happens. It helps manage and organize the hardware resources (like cores) needed to run your AI programs.
2. Backend: The main hub that oversees everything and manages all the tools needed to run your AI programs effectively. It keeps track of all the operations you can perform and makes sure everything runs smoothly.
3. Context: Can be visualized as the place where your AI program lives and breathes. It holds the AI networks (the brain of the AI model) and helps them talk to each other and share information.
4. Graph: A kind of blueprint of your AI program. It's like a map that shows how everyt...
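The Device → Backend → Context → Graph nesting above can be sketched as a tiny object model. The class and method names here are our invention purely to show the containment relationships; the real Qualcomm AI Engine Direct interface is a C API and does not look like this.

```python
# Illustrative object model of the stack components described above.
from dataclasses import dataclass, field

@dataclass
class Graph:
    name: str  # the blueprint of one network

@dataclass
class Context:
    graphs: list = field(default_factory=list)  # networks live here
    def add_graph(self, name):
        g = Graph(name)
        self.graphs.append(g)
        return g

@dataclass
class Backend:
    contexts: list = field(default_factory=list)  # the managing hub
    def create_context(self):
        c = Context()
        self.contexts.append(c)
        return c

@dataclass
class Device:
    cores: list  # virtual view of the hardware resources
    def create_backend(self):
        return Backend()

# Walk the hierarchy top-down, mirroring the four components:
device = Device(cores=["cpu", "gpu", "htp"])
backend = device.create_backend()
context = backend.create_context()
graph = context.add_graph("mobilenet")
print(graph.name)  # mobilenet
```

Reading it top-down mirrors the stack: a device exposes its cores, a backend manages contexts, a context owns graphs, and each graph describes one network.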

High-Level Working of the Qualcomm AI Engine SDK

   Working of the AI Engine SDK

Qualcomm AI Engine Direct works through something called "backends", which are software components that help the AI Engine communicate with different parts of the Qualcomm processors. These backends are often packaged as shared libraries, which are like bundles of code that can easily be used by other programs. There are different types of backends, each designed for a specific hardware accelerator core:
1. CPU backend: For the Snapdragon CPU. It helps the AI Engine SDK make better use of the CPU.
2. DSP backend: For the Hexagon DSP accelerator. It helps the AI Engine use the DSP efficiently.
3. GPU backend: For the Adreno GPU. It helps the AI Engine utilize the GPU's power.
4. HTP backend: For the Hexagon Tensor Processor. It helps the AI Engine communicate with the HTP.
5. HTA backend: For the Snapdragon HTA accelerator.
6. LPAI backend: For the Snapdragon LPAI accelerator...
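Since each backend ships as a shared library, selecting a backend amounts to picking the right library for the target core. The sketch below shows that lookup in Python; the `.so` file names are assumptions for illustration, so check your own SDK distribution for the exact names it ships.

```python
# Sketch: map a requested accelerator core to its backend library.
# Library names below are illustrative assumptions, not confirmed
# file names from any specific SDK release.
BACKEND_LIBS = {
    "cpu": "libQnnCpu.so",   # Snapdragon CPU backend
    "gpu": "libQnnGpu.so",   # Adreno GPU backend
    "dsp": "libQnnDsp.so",   # Hexagon DSP backend
    "htp": "libQnnHtp.so",   # Hexagon Tensor Processor backend
}

def pick_backend(core):
    """Return the backend library for a requested core, or raise."""
    try:
        return BACKEND_LIBS[core.lower()]
    except KeyError:
        raise ValueError(f"no backend for core: {core}")

print(pick_backend("HTP"))  # libQnnHtp.so
```

An application would then load the chosen library and route all graph execution through it, which is why swapping accelerators does not require changing the model itself.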

Qualcomm AI Engine Direct SDK

      Intro to the AI Engine SDK

It is also called the Qualcomm Neural Network in the source code. The AI Engine SDK is a toolkit that helps developers work more closely with the hardware in Qualcomm processors to make their AI models run faster and more efficiently. Here is what it does:
1. Direct access to hardware: It gives developers direct access to the parts of the Qualcomm processors that are good at handling AI tasks, including the Kryo CPU, Adreno GPU, and Hexagon processor.
2. Improved performance: By tapping into specific parts of the processor, developers can make their AI models run faster and better.
3. Flexible usage: Developers can focus on a specific part of the processor, or they can let the SDK decide which part is best for the task. For example, they can choose to use the Hexagon processor directly for certain tasks.
4. Modular and reusable: The SDK is designed in such a way that developers can easily reuse parts of their code f...
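The "flexible usage" point (pick a specific accelerator, or let the toolkit decide) can be sketched as a simple preference-ordered fallback. This is our own illustration of the idea, not the SDK's actual selection logic: the availability set is a stand-in for whatever hardware probing the real runtime performs.

```python
# Sketch of accelerator fallback: prefer the fastest core that is
# actually present, falling back toward the CPU. Preference order
# here is an assumption for illustration.

PREFERENCE = ["htp", "gpu", "dsp", "cpu"]  # most preferred first

def select_core(available):
    """Pick the most preferred core present on this device."""
    for core in PREFERENCE:
        if core in available:
            return core
    raise RuntimeError("no usable core found")

print(select_core({"gpu", "cpu"}))  # gpu
print(select_core({"cpu"}))         # cpu
```

A developer who wants full control would skip the fallback and request one core explicitly, which is exactly the trade-off the post describes between targeting the Hexagon processor directly and letting the SDK choose.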