
Showing posts from May, 2024

Ch02:- Obstacles while creating chatbot

Making a chatbot
1) First of all, we built a dummy database of medical summaries in MySQL.
2) We then connected that MySQL server to the Flask server which we had built.
3) In the third step, we passed this data to the genai model.
4) The genai model is able to process this data and return it in JSON format (because we asked in the prompt that we want the output in JSON format).
5) But when we asked about our age or something similar related to the medical test dataset, the response says it can't disobey the medical standard guidelines. It comes back looking something like this:
Completion(candidates=[], result=None, filters=[{'reason': <BlockedReason.SAFETY: 1>}], safety_feedback=[{'rating': {'category': <HarmCategory.HARM_CATEGORY_MEDICAL: 5>, ...
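To make the flow concrete, here is a minimal sketch of steps 1 to 5. The table name, column names, connection details, and the genai call are illustrative assumptions (the generate_text endpoint is the one that produces the Completion object shown above), not the project's exact code.

    # Minimal sketch of steps 1 to 5 above. Database names, credentials and the
    # model endpoint are assumptions for illustration only.
    import json
    import mysql.connector
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # assumed: key supplied separately

    def fetch_summary(patient_id):
        # Steps 1 and 2: read the dummy medical summary from MySQL.
        conn = mysql.connector.connect(host="localhost", user="root",
                                       password="secret", database="medical_db")
        cursor = conn.cursor(dictionary=True)
        cursor.execute("SELECT * FROM summaries WHERE patient_id = %s", (patient_id,))
        row = cursor.fetchone()
        conn.close()
        return row

    def ask_genai(summary, question):
        # Steps 3 and 4: pass the data to genai and ask for a JSON answer.
        prompt = ("Here is a medical summary record:\n"
                  + json.dumps(summary, default=str)
                  + "\n\nQuestion: " + question
                  + "\nAnswer strictly as a JSON object.")
        response = genai.generate_text(prompt=prompt)
        # Step 5: an empty candidates list means the safety filter blocked the
        # reply, which is exactly the Completion(...) output shown above.
        if not response.candidates:
            return {"blocked": True, "filters": response.filters}
        return json.loads(response.result)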

C01:- Chatbot

Workflow of Chatbot
MySQL: for storing and managing user data.
Python: for implementing the core logic of the chatbot.
Large Language Model (LLM): for generating human-like responses.
Flask: for creating a web server that handles incoming requests.
Detailed Workflow:
Step 1: User Query Handling
User Question: the user asks a question through the chatbot interface.
Flask Server: the Flask server receives the user's question.
Step 2: Processing Query
Query Processing: the Flask server processes the user's question. Depending on the question, it might directly fetch information from a pre-defined database (if needed), or generate a prompt for the Large Language Model (e.g., Gemini).
Step 3: Generating Responses
Prompt Creation: the Flask server prepares a prompt that includes the user's question.
LLM Invocation: the prompt is sent to the Large Language Model.
Response Generation: the LLM processes the prom...
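A minimal Flask sketch of Steps 1 to 3 could look like the following; the /chat endpoint, the request shape, and the Gemini model name are illustrative assumptions rather than the project's actual code.

    # Sketch of Steps 1 to 3: receive the question, build a prompt, invoke the LLM.
    # Endpoint name, request format and model name are assumptions.
    from flask import Flask, request, jsonify
    import google.generativeai as genai

    app = Flask(__name__)
    genai.configure(api_key="YOUR_API_KEY")      # assumed: key from configuration
    model = genai.GenerativeModel("gemini-pro")  # assumed Gemini model name

    @app.route("/chat", methods=["POST"])
    def chat():
        question = request.json.get("question", "")  # Step 1: user question arrives
        prompt = "User question: " + question        # Step 3: prompt creation
        response = model.generate_content(prompt)    # LLM invocation
        return jsonify({"answer": response.text})    # response generation

    if __name__ == "__main__":
        app.run(debug=True)

The chatbot interface would then POST a JSON body like {"question": "..."} to /chat and display the returned answer.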

M01:- Medical Report Summarization

Extraction of structured data from Medical Report
In this blog, we will see how we can actually extract the medical field data from a given medical report. In the era of digital healthcare, the ability to efficiently extract and analyze medical data is important. Whether it's for research, diagnosis, or patient care, having structured access to medical information can significantly enhance decision-making processes.
Setup for the environment
Before the extraction process, we will make sure we have the necessary tools and libraries installed.
-> We will use Google Colab as the working environment.
-> After that, we will install the necessary tools and libraries for our work.
Data Extraction Process
-> We will then start by defining a function to extract patient data from PDFs. This function will utilize OCR to convert to sc...
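The excerpt is cut off before the extraction function itself, but a sketch of the kind of OCR-based PDF extraction it describes could look like this, assuming pdf2image and pytesseract as the OCR tools; the field names and patterns are illustrative.

    # Sketch of an OCR-based extractor for patient data from a scanned medical PDF.
    # Assumes pdf2image + pytesseract (plus poppler-utils and tesseract-ocr on the
    # Colab VM); the extracted fields and regex patterns are illustrative.
    import re
    from pdf2image import convert_from_path
    import pytesseract

    def extract_patient_data(pdf_path):
        # Convert each PDF page to an image, then OCR it into plain text.
        pages = convert_from_path(pdf_path)
        text = "\n".join(pytesseract.image_to_string(page) for page in pages)

        # Pull a few example fields out of the raw text with simple patterns.
        patterns = {
            "name": r"Name[:\s]+([A-Za-z .]+)",
            "age": r"Age[:\s]+(\d+)",
            "haemoglobin": r"Haemoglobin[:\s]+([\d.]+)",
        }
        data = {}
        for field, pattern in patterns.items():
            match = re.search(pattern, text, re.IGNORECASE)
            data[field] = match.group(1).strip() if match else None
        return data

    # Usage: extract_patient_data("report.pdf") -> {"name": ..., "age": ..., ...}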

T04:- Execution of existing model on the target device

Execution of DLC model on Target device
In the last blog, we saw how we can convert an existing model into DLC. Now we will see how we can actually run the DLC file on the target device. First, we run this model on the host device itself by using the tool provided in the SNPE SDK. We have to pass the container (DLC) path and the input list path when running on the host.
--container -> the DLC file path.
--input_list -> the raw_list txt path.
-> To show the output on the screen, we have to use another command.
Now we will see how we actually ran it on the Android target.
1. First, we have to export the necessary libraries for running it on the Android device.
2. Second, we have to export the necessary executable files on the Android device.
3. Then we have made the necessary folders on the An...
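As a concrete illustration of the host-side run described above, the two flags from the list map onto a single snpe-net-run invocation; the Python wrapper below uses illustrative paths, and the exact library locations for the Android steps depend on the SDK version.

    # Sketch of the host-side run described above: snpe-net-run takes the DLC
    # container and the raw input list. Paths are illustrative.
    import subprocess

    dlc_path = "dlc/inception_v3.dlc"         # --container: the DLC file path
    input_list = "data/cropped/raw_list.txt"  # --input_list: the raw_list txt path

    subprocess.run(["snpe-net-run",
                    "--container", dlc_path,
                    "--input_list", input_list],
                   check=True)
    # snpe-net-run writes its raw result files to an output folder, which the
    # follow-up command mentioned in the post then renders on the screen.

    # For the Android target, libraries, executables and model files are copied
    # over with adb (destination folder is illustrative), e.g.:
    subprocess.run(["adb", "push", dlc_path, "/data/local/tmp/inceptionv3/"],
                   check=True)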

T03:- How to convert given existing model into DLC format

Conversion of given model into DLC format
We will try to convert the machine learning model that comes with the Neural Processing SDK into the DLC format manually.
-> We are doing this step because the SDK tools and libraries can only understand the model once it is converted into DLC format.
-> We will first create a directory structure like the one above in this path: /opt/qcom/aistack/snpe/<Version_number>/examples/Models/InceptionV3
-> We will make a folder data.
     mkdir data && cd data
-> Then we will make another subfolder inside the data folder.
     mkdir cropped
-> Then we will come back out of the data folder.
-> After that we will make dlc and tensorflow folders in the InceptionV3 folder.
     mkdir dlc tensorflow
After making all the required folders in the InceptionV3 model directory, we will download the InceptionV3 model into the tmp ...
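Since the directory-tree figure is not reproduced here, the layout that the commands above build can be summarised with a short script; the SDK path is the one quoted in the post, with <Version_number> left as a placeholder.

    # Recreate the folder layout described above under the InceptionV3 example.
    # The SDK root is the path quoted in the post; <Version_number> is a placeholder.
    from pathlib import Path

    sdk_root = Path("/opt/qcom/aistack/snpe/<Version_number>")
    model_dir = sdk_root / "examples" / "Models" / "InceptionV3"

    for sub in ("data/cropped", "dlc", "tensorflow"):
        (model_dir / sub).mkdir(parents=True, exist_ok=True)

    # Resulting layout:
    # InceptionV3/
    #     data/
    #         cropped/
    #     dlc/
    #     tensorflow/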