

Image by Editor | ChatGPT
# Introduction
 
Ready for a practical walkthrough with little to no code involved, depending on the approach you choose? This tutorial shows how to tie together two powerful tools, OpenAI’s GPT models and the Airtable cloud-based database, to prototype a simple, toy-sized retrieval-augmented generation (RAG) system. The system accepts question-based prompts and uses text data stored in Airtable as the knowledge base to produce grounded answers. If you’re unfamiliar with RAG systems, or want a refresher, don’t miss this article series on understanding RAG.
# The Components
 
To follow this tutorial yourself, you will need:
- An Airtable account with a base created in your workspace.
- An OpenAI API key (ideally a paid plan, for flexibility in model selection).
- A Pipedream account: an orchestration and automation app that allows experimentation under a free tier (with limits on daily runs).
 
# The Retrieval-Augmented Generation Recipe
 
The process of building our RAG system isn’t purely linear, and some steps can be taken in different ways. Depending on your level of programming knowledge, you may opt for a code-free or nearly code-free approach, or create the workflow programmatically.
In essence, we will use Pipedream to create an orchestration workflow consisting of three elements:
- Trigger: similar to a web service request, this element initiates an action flow that passes through the subsequent elements in the workflow. Once deployed, this is where you specify the request, i.e., the user prompt for our prototype RAG system.
- Airtable block: establishes a connection to our Airtable base and specific table so its records can serve as the RAG system’s knowledge base. We will add some text data to it shortly within Airtable.
- OpenAI block: connects to OpenAI’s GPT-based language models using an API key and passes the user prompt along with the context (the retrieved Airtable records) to the model to obtain a response.
 
But first, we need to create a new table in our Airtable base containing text data. For this example, I created an empty table with three fields (ID: single-line text, Source: single-line text, Content: long text), and then imported data from this publicly available small dataset containing text with basic information about Asian countries. Use the CSV and link options to import the data into the table. More information about creating tables and importing data is in this article.
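To make the expected structure concrete, here is a minimal sketch of what one imported row looks like, written as a plain JavaScript object. The field values are illustrative stand-ins rather than rows copied from the dataset; only the three field names (ID, Source, Content) matter, since the workflow will later read the Content field.

```javascript
// Illustrative only: one knowledge base row after the CSV import.
// The field names must match the table schema (ID, Source, Content);
// the values below are made up for illustration.
const exampleRow = {
  ID: "1",
  Source: "asian_countries.csv", // hypothetical source label
  Content: "Japan is an island country in East Asia. Its capital is Tokyo.",
};

console.log(exampleRow.Content); // the text the RAG step will use as context
```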
Once you have your knowledge base ready in Airtable, let’s switch to Pipedream for the rest of the process! Sign up if you haven’t done so yet, and log in. There are several ways to initiate a workflow. One of them is by describing what you want to create to Pipedream’s AI agent, which is intuitive but may result in errors needing fixes, as we’ll illustrate shortly. Pipedream has two parallel UIs: one for manually building and editing workflows, and one for conversations with its AI agent, called String.


Pipedream’s AI conversational interface (String)
If you opt for letting String build an approximate version of your desired workflow, describe clearly what to build, with a prompt that may look something like:
I would like to orchestrate a simple toy RAG application that uses an Airtable table containing text, and OpenAI’s GPT models (I have a paid API key).
Pipedream’s language model may return an answer for you to approve (at your own risk), like this:
I’ll create a simple RAG (Retrieval-Augmented Generation) agent that connects your Airtable table with OpenAI’s GPT models. Here’s what I’ll build:
1. Set up a trigger to accept queries (via webhook or manual execution)
2. Search and retrieve relevant text from your Airtable table based on the query
3. Use the retrieved text as context for OpenAI GPT to generate an informed response
4. Return the AI-generated answer that’s grounded in your Airtable data
This will give you a basic RAG system where you can ask questions and get answers based on the content stored in your Airtable, enhanced by GPT’s reasoning capabilities.
For the sake of experimentation and “seeing what happens,” approve the suggested course of action to obtain a three-block workflow similar to the one in the image shown above.
Since this isn’t the core Pipedream editor UI, switch to it by clicking “Open Pipedream” in the upper right corner. A new tab will open with the actual Pipedream workflow editor.
For the trigger block, a URL is automatically generated with a syntax similar to this one I obtained for mine: https://eoupscprutt37xx.m.pipedream.net. Click it and, in the settings pane that opens on the right-hand side, make sure the first couple of options are set to “Full HTTP request” and “Return a static response.”
For the second block (the Airtable action) there may be a little work to do. First, connect to your Airtable base. If you’re working in the same browser, this should be straightforward: sign in to Airtable from the pop-up window that appears after clicking “Connect new account,” then follow the on-screen steps to specify the base and table to access:


Pipedream workflow editor: connecting to Airtable
Here comes the tricky part (and a reason I deliberately left an imperfect prompt earlier when asking the AI agent to build the skeleton workflow): there are several kinds of Airtable actions to choose from, and the specific one we need for a RAG-style retrieval mechanism is “List records.” Chances are, this isn’t the action you see in the second block of your workflow. If that’s the case, remove it, add a new block in the middle, select “Airtable,” and choose “List records.” Then reconnect to your table and test the connection to make sure it works.
This is what a successfully tested connection looks like:


Pipedream workflow editor: testing connection to Airtable
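Before wiring this output into the OpenAI step, it helps to know roughly what “List records” hands over: its $return_value is an array of Airtable record objects, each carrying the table columns under a fields property. The sketch below, with illustrative values, shows the structure the next block reads fields.Content from.

```javascript
// Rough sketch of steps.list_records.$return_value (values are illustrative).
const listRecordsReturnValue = [
  {
    id: "recXXXXXXXXXXXXXX", // Airtable record ID (illustrative)
    createdTime: "2024-01-01T00:00:00.000Z",
    fields: {
      ID: "1",
      Source: "asian_countries.csv",
      Content: "Text about one Asian country, taken from the Content column.",
    },
  },
  // ...one entry per row in the table
];

// The OpenAI block below pulls record.fields?.Content from each entry
// and joins the non-empty texts into the context passed to the model.
const context = listRecordsReturnValue
  .map(record => record.fields?.Content ?? "")
  .filter(text => text.length > 0)
  .join("\n\n---\n\n");
console.log(context);
```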
Last, set up and configure OpenAI access to GPT. Keep your API key handy. If your third block’s secondary label isn’t “Generate RAG response,” remove the block and replace it with a new OpenAI block of this subtype.
Begin by establishing an OpenAI connection using your API key:


Establishing OpenAI connection
The user question field must be set as {{ steps.trigger.event.body.test }}, and the knowledge base records (your text “documents” for RAG, coming from Airtable) must be set as {{ steps.list_records.$return_value }}.
You can keep the rest as defaults and test, but you may encounter parsing errors common to these kinds of workflows, prompting you to jump back to String for help and automatic fixes from the AI agent. Alternatively, you can directly copy and paste the following into the OpenAI section’s code field at the bottom for a robust solution:
import openai from "@pipedream/openai"
export default defineComponent({
  name: "Generate RAG Response",
  description: "Generate a response using OpenAI based on the user question and the Airtable knowledge base content",
  type: "action",
  props: {
    openai,
    model: {
      propDefinition: [
        openai,
        "chatCompletionModelId",
      ],
    },
    question: {
      type: "string",
      label: "User Question",
      description: "The question from the webhook trigger",
      default: "{{ steps.trigger.event.body.test }}",
    },
    knowledgeBaseRecords: {
      type: "any",
      label: "Knowledge Base Records",
      description: "The Airtable records containing the knowledge base content",
      default: "{{ steps.list_records.$return_value }}",
    },
  },
  async run({ $ }) {
    // Extract the user question
    const userQuestion = this.question;

    if (!userQuestion) {
      throw new Error("No question provided from the trigger");
    }
    // Process the Airtable records to extract their content
    const records = this.knowledgeBaseRecords;
    let knowledgeBaseContent = "";

    if (records && Array.isArray(records)) {
      knowledgeBaseContent = records
        .map(record => {
          // Extract the text from fields.Content
          const content = record.fields?.Content;
          return content ? content.trim() : "";
        })
        .filter(content => content.length > 0) // Remove empty content
        .join("\n\n---\n\n"); // Separate the different knowledge base entries
    }
    if (!knowledgeBaseContent) {
      throw new Error("No content found in knowledge base records");
    }
    // Create the system prompt with the knowledge base as context
    const systemPrompt = `You are a helpful assistant that answers questions based on the provided knowledge base. Use only the information from the knowledge base below to answer questions. If the information is not available in the knowledge base, please say so.
Knowledge Base:
${knowledgeBaseContent}
Instructions:
- Answer based only on the provided knowledge base content
- Be accurate and concise
- If the answer is not in the knowledge base, clearly state that the information is not available
- Cite relevant parts of the knowledge base when possible`;
    // Prepare the messages for OpenAI
    const messages = [
      {
        role: "system",
        content: systemPrompt,
      },
      {
        role: "user",
        content: userQuestion,
      },
    ];
    // Call the OpenAI chat completion
    const response = await this.openai.createChatCompletion({
      $,
      data: {
        model: this.model,
        messages: messages,
        temperature: 0.7,
        max_tokens: 1000,
      },
    });
    const generatedResponse = response.generated_message?.content;
    if (!generatedResponse) {
      throw new Error("Failed to generate a response from OpenAI");
    }
    // Export a summary for user feedback
    $.export("$summary", `Generated RAG response for question: "${userQuestion.substring(0, 50)}${userQuestion.length > 50 ? '...' : ''}"`);
    // Return the generated response
    return {
      question: userQuestion,
      response: generatedResponse,
      model_used: this.model,
      knowledge_base_entries: records ? records.length : 0,
      full_openai_response: response,
    };
  },
})
If no errors or warnings appear, you should be ready to test and deploy. Deploy first, and then test by passing a user query like this in the newly opened deployment tab:


Testing the deployed workflow with a prompt asking what the capital of Japan is
If the request is handled and everything runs correctly, scroll down to see the response returned by the GPT model accessed in the last stage of the workflow:


GPT model response
Well done! This response is grounded in the knowledge base we built in Airtable, so we now have a simple prototype RAG system that combines Airtable and GPT models via Pipedream.
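If you prefer to exercise the deployed workflow from your own code instead of the deployment tab, a small script along the following lines should work. The URL is a placeholder for the trigger URL generated for your workflow, and the body key must be test, because the OpenAI step reads {{ steps.trigger.event.body.test }}. Note that with the trigger configured to return a static response, the HTTP reply you receive is that static body; the generated answer itself shows up in the workflow’s event log.

```javascript
// Minimal sketch: calling the deployed trigger from Node 18+ (built-in fetch).
// Replace the placeholder URL with the trigger URL generated for your workflow.
const TRIGGER_URL = "https://your-trigger-id.m.pipedream.net"; // placeholder, not a real endpoint

async function askWorkflow(question) {
  const res = await fetch(TRIGGER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The key must be "test": the OpenAI step reads steps.trigger.event.body.test
    body: JSON.stringify({ test: question }),
  });
  // With "Return a static response" enabled, this prints the static body;
  // open the workflow's event log in Pipedream to see the generated answer.
  console.log(res.status, await res.text());
}

askWorkflow("What is the capital of Japan?");
```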
# Wrapping Up
 
This article showed how to build, with little or no coding, an orchestration workflow to prototype a RAG system that uses an Airtable text database as the knowledge base for retrieval and OpenAI’s GPT models for response generation. Pipedream allows defining orchestration workflows programmatically, manually, or aided by its conversational AI agent. Through the author’s experiences, we succinctly showcased the pros and cons of each approach.
 
 
Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.
