Need help to query external vector DB

I am trying to create an action to connect to an external vector DB. The reason behind this is we want to set up a RAG pipeline where documents are continuously synced to a DB. After a lot of research, I found that Vectara works via an API without a middleware service. I found a plugin for TypingMind which I am trying to adapt for Pickaxe, but it isn't working.

Here’s the code I am using in my action:

import os
import requests

def vectara(query: str):
    """
    Vectara Vector DB

    Args:
        query (string): query
    Envs:
        VECTARA_API_KEY (string): vectara api key
        CORPUS_KEY (string): corpus key
    """

    api_key = os.environ.get('VECTARA_API_KEY')
    if not api_key:
        return {"error": "Vectara API key is not set."}

    # Must match the env name declared in the Envs section of the docstring
    corpus_key = os.environ.get('CORPUS_KEY')
    if not corpus_key:
        return {"error": "No corpus key provided or set as default."}

    endpoint = f"https://api.vectara.io/v2/corpora/{corpus_key}/query"
    headers = {
        'Content-Type': 'application/json',
        'x-api-key': api_key
    }
    payload = {
        "query": query,
        "search": {},
        "generation": {}
    }

    # Send the query and return the raw JSON so the results reach the bot
    response = requests.post(endpoint, headers=headers, json=payload, timeout=10)
    return response.json()

However, I keep getting this error when the action executes:

[screenshot of the error]

Any help would be deeply appreciated!

@zerodot, what documents do you want to continuously sync, and what do you do with those documents once they are in the third-party vector store?

I work in the education sector where my team continuously creates content (e.g. assessment questions) on a daily basis. We do it predominantly in Google Docs, which are stored on Drive. We want all docs to be immediately vectorized and ready for retrieval. Solutions like Context Data let you connect different vector stores and embedding models.

Pickaxe is a great tool, but its RAG isn't great and we might need to develop our own custom pipeline. Where Pickaxe helps a lot is that it has taken away the pain of frontend development and maintenance.

@nathaniel @nathanielmhld check this out! Requesting API connection to an external RAG system.

@zerodot I'm looking at unstructured.io to ingest any type of data and then send it to a Pinecone vector store. We can then build a Pickaxe action to retrieve data from Pinecone.

Alternatively, you can build a Pinecone vector store and connect it to your Google Drive to have a dynamic vector store.

Considering your use case, I recommend something like Pinecone or Cohere, which also offer reranking for better RAG quality.
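
To illustrate what that reranking step could look like, here is a minimal sketch against Cohere's public /v1/rerank REST endpoint. The endpoint, model name, and response shape are Cohere's documented ones; the document list, key handling, and function name are placeholders of my own:

import os
import requests

def rerank_results(query: str, documents: list, top_n: int = 3):
    """Rerank retrieved chunks with Cohere before handing them to the LLM.

    Assumes COHERE_API_KEY is set; `documents` stands in for the text
    chunks returned by your vector store query.
    """
    response = requests.post(
        "https://api.cohere.com/v1/rerank",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['COHERE_API_KEY']}",
        },
        json={
            "model": "rerank-english-v3.0",
            "query": query,
            "documents": documents,
            "top_n": top_n,
        },
        timeout=10,
    )
    results = response.json().get("results", [])
    # Each result holds the index of the original document plus a relevance score
    return [(documents[r["index"]], r["relevance_score"]) for r in results]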

@admin_mike @nathaniel @nathanielmhld unstructured.io could be a good option for files uploaded by users. It ingests images and video as well as text and converts them into AI-friendly JSONs for easier chunking and vectorization.

I’d love to hear your thoughts on this.

Thank you. Yes, I think we might need to try that. From what I understand, one also needs a middleware service to enable the LLM to communicate with a vector DB like Pinecone (https://platform.openai.com/docs/actions/data-retrieval). Yet to try it with Pickaxe.

Have you tried connecting with the Pinecone vector DB? Would we just use Pinecone and an Action?

For most applications you shouldn't try to do this, because Pickaxe already provides a directly managed vector DB as part of our service by way of the knowledge base. That said, it is possible to set up a connection with an external vector DB through actions, though it probably won't be very simple. If someone does create such an action, please let us know and we'll make a version of it public so that everyone can benefit!

Hi @nathaniel, I have created an action that talks to the Pinecone assistant. I actually need this for a client that wants to use Pinecone.

BUT…I’m having problems printing any information other than the “content”. I want to print the citation and the file URL.

Any idea?

This is what Pinecone returns:

{
    "finish_reason": "stop",
    "message": {
        "role": "\"assistant\"",
        "content": "The Melt Flow Rate (MFR) for SUPEER™ 7358A, which is a measure of its melt index (MI), is 3.5 g/10 min at 190 °C and 2.16 kg load ."
    },
    "id": "00000000000000003905f6d65099910c",
    "model": "gpt-4o-2024-05-13",
    "usage": {
        "prompt_tokens": 17932,
        "completion_tokens": 63,
        "total_tokens": 17995
    },
    "citations": [
        {
            "position": 133,
            "references": [
                {
                    "file": {
                        "status": "Available",
                        "id": "64c73bf7-eb1a-47ba-bb9b-174c259fea7c",
                        "name": "SUPEER™ mLLDPE_7358A_Americas_Technical_Data_Sheet.pdf",
                        "size": 109892,
                        "metadata": null,
                        "updated_on": "2024-11-27T22:58:03.698981058Z",
                        "created_on": "2024-11-27T22:57:34.371168437Z",
                        "percent_done": 1.0,
                        "signed_url": "https://storage.googleapis.com/knowledge-prod-files/72d194e1-8a54-4684-bf8b-386c64f00cb7%2Fb16d32ca-d0a8-4328-91b5-798b60bcb853%2F64c73bf7-eb1a-47ba-bb9b-174c259fea7c.pdf?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=ke-prod-1%40pc-knowledge-prod.iam.gserviceaccount.com%2F20241128%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20241128T024002Z&X-Goog-Expires=3600&X-Goog-SignedHeaders=host&response-content-disposition=inline&response-content-type=application%2Fpdf&X-Goog-Signature=8c61bc6be80a45476a8aa7abf20b40f7ea98c0eaf2be4a8ac1949d98c4a3ebe1df77301aee5bac57f6d7a3342ad4dfedf75ece562004349d2c3f491d604ad2ce30a0d70ebecb7858177f03bf60ebf5519e11df6ce57f3e6ad74c71549909fbac0650838c686acc1deb154879b42a5806e7f110ef4f999177a2ef975e4f63e588f976c91f99bacd6aca910b2bdf39d89093a715e31c280356c79809e741c21b121b6dfb7be0fa48ecd560ed698cd24da53918e13a0014d9f34dab60ad55b3e41dbd455678f0b5aef3d057becb600ec7e14985403a7fdbb7c8c162643afb556e297dd97ec442c6321b8a7854c7ac3fd14a0a4437b9efe389107b869cc65b1b4dfe"
                    },
                    "pages": [
                        2,
                        1
                    ]
                }
            ]
        }
    ]
}

This is my action (for testing purposes I'm not using environment variables). It works, but I can't get Pickaxe to return any info other than the content:

import os
import requests
import json

def pinecone_assistant(question: str):
    """
    Send the question to the Pinecone assistant to retrieve an answer

    Args:
        question (string): User's question
    """

    # Insert your PYTHON code below. You can access environment variables using os.environ[].
    # Currently, only the requests library is supported, but more libraries will be available soon.
    # Use print statements or return values to display results to the user.
    # If you save a png, pdf, csv, jpg, webp, gif, or html file in the root directory, it will be automatically displayed to the user.
    # You do not have to call this function as the bot will automatically call and fill in the parameters.
    
    # Pinecone Assistant API endpoint
    url = "https://prod-1-data.ke.pinecone.io/assistant/chat/test-xxx"

    # Payload containing the user's question
    payload = {
        "messages": [
            {
                "role": "user",
                "content": question
            }
        ],
        "stream": False,
        "model": "gpt-4o"
    }

    # Headers including the API key for authentication
    headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer pcsk_6T6btv_LEyA7BxeAdzErJoogVeVKRAhS94hMXFmAKFTiYJSptMBK9KiWo2ri2aqtXXXXXXXXX'
    }

    try:
        # Make the POST request
        response = requests.post(url, headers=headers, json=payload, timeout=10)

        # Parse the JSON response
        response_data = response.json()

        # Log the full response (for debugging)
        print("Full Response JSON:", json.dumps(response_data, indent=2))

        # Extract assistant reply
        assistant_reply = response_data.get("message", {}).get("content", "No response received.")

        # Extract file details (assuming first citation exists)
        citations = response_data.get("citations", [])
        if citations:
            first_citation = citations[0].get("references", [])[0].get("file", {})
            file_name = first_citation.get("name", "No file name available")
            file_url = first_citation.get("signed_url", "No file URL available")
        else:
            file_name = "No citations available"
            file_url = "No file URL available"

        # Return all details as plain text
        result = (
            f"Assistant Reply:\n{assistant_reply}\n\n"
            f"File Name: {file_name}\n"
            f"File URL: {file_url}"
        )
        return result

    except Exception as e:
        # Return error information
        return f"An error occurred: {e}"

# Example Usage
user_question = "What is the MI for SABIC Supeer 7358A?"
assistant_response = pinecone_assistant(user_question)

if assistant_response:
    print(assistant_response)

Thank you

@katrin_birkholz

So awesome @ab2308!

I'm not 100% sure, but I'll give some thoughts; we can see if it works, and if not I'd be happy to jump on a quick call and debug together.

First, I would remove the last part of the above code. In our system you just write the function and we call it for you, so you shouldn't actually call the function in the action code. Remove:

# Example Usage
user_question = "What is the MI for SABIC Supeer 7358A?"
assistant_response = pinecone_assistant(user_question)

if assistant_response:
    print(assistant_response)

Second, cool citations. It looks like you're printing them, which should work. But every time you print, AND every time you return, the AI gets that output, so I would stick to one or the other (printing or returning) so as not to double up. I would experiment with only giving it the citations, not the content, to make sure they're being parsed correctly. Beyond that, it could be more of a prompting issue.
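
For that citations-only experiment, a hypothetical helper like the one below could replace the current return value; the field names are taken from the sample Pinecone JSON earlier in the thread, everything else is a placeholder:

def format_citations(response_data: dict) -> str:
    """Return only the citation details, not the assistant content,
    so you can verify the citations are being parsed correctly."""
    lines = []
    for citation in response_data.get("citations", []):
        for ref in citation.get("references", []):
            file_info = ref.get("file", {})
            lines.append(
                f"Position: {citation.get('position')}, "
                f"File: {file_info.get('name')}, "
                f"Pages: {ref.get('pages')}, "
                f"URL: {file_info.get('signed_url')}"
            )
    return "\n".join(lines) if lines else "No citations returned."

Inside the try block you would then return format_citations(response_data) instead of result.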

Hope this helps, thanks for putting it together!

I think you could create a Make.com scenario, upload the files to a Google Drive, and add them to an OpenAI Vector Store.

Could we have an action that connects to Google Drive?

@ihmunro I believe that talking to the Pinecone assistant is easier. Google often requires re-authentication because you are accessing sensitive information, and I find it painful to manage.

Thanks AB

I agree Google is a pain. Could we use OneDrive or Dropbox?

It would be nice if there were a tool that allowed the knowledge base to be added to, but as that isn't available, we need to come up with something else.

I will look at Pinecone - thanks

What about the ability to add to the OpenAI Vector Store?

@ihmunro I don't think we can send files to third-party apps through Pickaxe yet. The files get chunked and embedded in the vector store as soon as they are uploaded. The ones uploaded by users are temporarily stored for the duration of the chat.

If you are trying to build a dynamic RAG pipeline, you have two options:

1. Create a webpage and add files to it (you might want to consider Make for this). Use the Pickaxe webpage scraper and set it up to run at regular intervals.

2. The OpenAI vector store can ingest HTML and JSON, and you can use its API to add text (including the chat summary, for example) to your file storage and vector store; a rough sketch of the upload step follows below. To retrieve the information you will also need to create a GPT assistant, and finally an action in Pickaxe that talks to that OpenAI assistant. There are too many steps for my liking. That's why I would recommend Pinecone: they have created an assistant specifically designed to extract the right information from their vector stores, and you can simply talk to it with a Pickaxe action.
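
Here is that rough sketch of the upload half of option 2, using OpenAI's documented /v1/files and /v1/vector_stores REST endpoints; the vector store ID, file name, and env var names are placeholders:

import os
import requests

headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

# Step 1: upload the file to OpenAI file storage with purpose "assistants"
with open("chat_summary.html", "rb") as f:
    upload = requests.post(
        "https://api.openai.com/v1/files",
        headers=headers,
        files={"file": f},
        data={"purpose": "assistants"},
        timeout=30,
    )
file_id = upload.json()["id"]

# Step 2: attach the uploaded file to the vector store so it gets
# chunked and embedded for retrieval by the assistant
vector_store_id = os.environ["VECTOR_STORE_ID"]  # placeholder
attach = requests.post(
    f"https://api.openai.com/v1/vector_stores/{vector_store_id}/files",
    headers={**headers, "Content-Type": "application/json",
             "OpenAI-Beta": "assistants=v2"},
    json={"file_id": file_id},
    timeout=30,
)
print(attach.json())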

Appreciate it AB

So did you get it working with Pinecone?

From your post above, it looked like you were having some issues.

@ihmunro yes, just testing it a few more times

1 Like

Hi AB

Excellent - let me know how it goes, as I would be interested in trying it out.

Hi AB

Any update on this?

I was also thinking: as it looks like you are going direct to Pinecone, have you thought about running a Make automation with Pinecone in the mix?

@ihmunro hey mate, didn't get the chance to make a video yet, but here is the code.

Unless you need the response from Pinecone for something else, I would talk directly to the Pinecone assistant without going through Make.

import os
import requests
import json

def pinecone_assistant(question: str):
    """
    Send the question to the Pinecone assistant to retrieve an answer

    Args:
        question (string): User's question
    """

    # Insert your PYTHON code below. You can access environment variables using os.environ[].
    # Currently, only the requests library is supported, but more libraries will be available soon.
    # Use print statements or return values to display results to the user.
    # If you save a png, pdf, csv, jpg, webp, gif, or html file in the root directory, it will be automatically displayed to the user.
    # You do not have to call this function as the bot will automatically call and fill in the parameters.
    
    # Pinecone Assistant API endpoint (add correct assistant name created in Pinecone)
    url = "https://prod-1-data.ke.pinecone.io/assistant/chat/XXX"

    # Payload containing the user's question
    payload = {
        "messages": [
            {
                "role": "user",
                "content": question
            }
        ],
        "stream": False,
        "model": "gpt-4o"
    }

    # Headers including the API key for authentication (replace with your key)
    headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer xxxx_xxxxx_xxxxxxxxxxxxxxxxxx'
    }

    try:
        # Make the POST request
        response = requests.post(url, headers=headers, json=payload, timeout=10)

        # Parse the JSON response
        response_data = response.json()

        # Log the full response (for debugging)
        print("Full Response JSON:", json.dumps(response_data, indent=2))

        # Extract assistant reply
        assistant_reply = response_data.get("message", {}).get("content", "No response received.")

        # Extract file details (assuming first citation exists)
        citations = response_data.get("citations", [])
        if citations:
            first_citation = citations[0].get("references", [])[0].get("file", {})
            file_name = first_citation.get("name", "No file name available")
            file_url = first_citation.get("signed_url", "No file URL available")
        else:
            file_name = "No citations available"
            file_url = "No file URL available"

        # Return all details as plain text
        result = (
            f"""GIVE THE FOLLOWING REPLY DIRECTLY INCLUDING THE POSITION:
            {assistant_reply}
            
            At the end of your response, LINK THE FOLLOWING FILES, STARTING WITH THE POSITION:
            File Name: {file_name}
            File URL: {file_url}"""
        )
        return result

    except Exception as e:
        # Return error information
        return f"An error occurred: {e}"

Many thanks, and yes, I agree: if you don't need to do it in Make, go straight to Pinecone.

I will start laying the foundation for this, but I would be interested in seeing the video when you have time to get around to it.