
I Vibe Coded a Tool That Analyzes Customer Sentiment and Topics From Call Recordings

Image by Author

 

Introduction

 
Every day, customer service centers record thousands of conversations. Hidden in these audio files are goldmines of information. Are customers satisfied? What problems do they mention most often? How do emotions shift during a call?
Manually analyzing these recordings is difficult. However, with modern artificial intelligence (AI), we can automatically transcribe calls, detect emotions, and extract recurring topics, all offline and with open-source tools.

In this article, I'll walk you through a complete customer sentiment analyzer project. You'll learn how to:

  • Transcribe audio files to text using Whisper
  • Detect sentiment (positive, negative, neutral) and emotions (frustration, satisfaction, urgency)
  • Extract topics automatically using BERTopic
  • Display results in an interactive dashboard

The best part is that everything runs locally. Your sensitive customer data never leaves your machine.

 

Fig 1: Dashboard overview showing sentiment gauge, emotion radar, and topic distribution

 

Understanding Why Local AI Matters for Customer Data

 
Cloud-based AI services like OpenAI's API are powerful, but they come with concerns: privacy, since customer calls often contain personal information; cost, since per-API-call pricing adds up quickly at high volumes; and a dependency on internet connectivity and rate limits. Running locally also makes it easier to meet data residency requirements.

This local AI speech-to-text tutorial keeps everything on your hardware. Models download once and run offline forever.

 

Fig 2: System architecture overview showing how each component handles one task well. This modular design makes the system easy to understand, test, and extend

 

// Prerequisites

Before starting, make sure you have the following:

  • Python 3.9+ installed on your machine
  • FFmpeg installed for audio processing
  • Basic familiarity with Python and machine learning concepts
  • About 2GB of disk space for AI models

 

// Setting Up Your Project

Clone the repository and set up your environment:

git clone https://github.com/zenUnicorn/Customer-Sentiment-analyzer.git

 

Create a virtual environment:

python -m venv venv

Activate it (Windows):

venv\Scripts\activate

Activate it (Mac/Linux):

source venv/bin/activate

Install dependencies:

pip install -r requirements.txt

 

The first run downloads the AI models (~1.5GB total). After that, everything works offline.

 

Fig 3: Terminal showing successful installation

 

Transcribing Audio with Whisper

 
In the customer sentiment analyzer, the first step is to turn spoken words from call recordings into text. This is handled by Whisper, an automatic speech recognition (ASR) system developed by OpenAI. Let's look at how it works, why it's a great choice, and how we use it in the project.

Whisper is a Transformer-based encoder-decoder model trained on 680,000 hours of multilingual audio. When you feed it an audio file, it:

  • Resamples the audio to 16kHz mono
  • Generates a mel spectrogram, a visual representation of frequencies over time that serves as a photo of the sound
  • Splits the spectrogram into 30-second windows
  • Passes each window through an encoder that creates hidden representations
  • Decodes these representations into text tokens, one word (or sub-word) at a time

Think of the mel spectrogram as how machines "see" sound. The x-axis represents time, the y-axis represents frequency, and color intensity shows volume. The result is a highly accurate transcript, even with background noise or accents.
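
If you want to see this preprocessing yourself, the openai-whisper package exposes the helper functions directly. A minimal sketch (the file path is a placeholder):

import whisper

# Load and resample the audio to 16 kHz mono, pad/trim it to Whisper's
# 30-second window, then compute the log-mel "photo of the sound".
audio = whisper.load_audio("call.mp3")   # placeholder path
audio = whisper.pad_or_trim(audio)       # exactly 30 seconds of samples
mel = whisper.log_mel_spectrogram(audio)
print(mel.shape)                         # (80, 3000): 80 mel bins x 3000 time frames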

Code Implementation

Here is the core transcription logic:

import whisper

class AudioTranscriber:
    def __init__(self, model_size="base"):
        self.model = whisper.load_model(model_size)

    def transcribe_audio(self, audio_path):
        result = self.model.transcribe(
            str(audio_path),
            word_timestamps=True,
            condition_on_previous_text=True
        )
        return {
            "text": result["text"],
            "segments": result["segments"],
            "language": result["language"]
        }
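
Using the class is straightforward. A quick illustrative run (the file path is hypothetical) prints each timestamped segment:

transcriber = AudioTranscriber(model_size="base")
result = transcriber.transcribe_audio("data/audio/sample_call.mp3")  # hypothetical path
print(result["language"])
for segment in result["segments"]:
    # Each Whisper segment carries start/end times in seconds plus its text
    print(f"[{segment['start']:.1f}s - {segment['end']:.1f}s] {segment['text']}")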

 

The model_size parameter controls the accuracy vs. speed trade-off.

 

Model    Parameters    Speed      Best For
tiny     39M           Fastest    Quick testing
base     74M           Fast       Development
small    244M          Medium     Production
large    1550M         Slow       Maximum accuracy

 

For most use cases, base or small offers the best balance.

 

Fig 4: Transcription output showing timestamped segments

 

Analyzing Sentiment with Transformers

 
With the text extracted, we analyze sentiment using Hugging Face Transformers. We use CardiffNLP's RoBERTa model, trained on social media text, which is perfect for conversational customer calls.

 

// Evaluating Sentiment and Emotion

Sentiment analysis classifies text as positive, neutral, or negative. We use a fine-tuned RoBERTa model because it understands context better than simple keyword matching.

The transcript is tokenized and passed through a Transformer. The final layer uses a softmax activation, which outputs probabilities that sum to 1. For example, if positive is 0.85, neutral is 0.10, and negative is 0.05, the overall sentiment is positive.

  • Sentiment: Overall polarity (positive, negative, or neutral), answering the question: "Is this good or bad?"
  • Emotion: Specific feelings (anger, joy, fear), answering the question: "What exactly are they feeling?"

We detect both for full insight.
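
Emotion detection works the same way as sentiment, just with a different checkpoint. As a minimal sketch, assuming a public emotion model such as j-hartmann/emotion-english-distilroberta-base (an assumption, not necessarily the exact model used in the project):

from transformers import pipeline

# Hypothetical emotion detector; the checkpoint choice is an assumption.
emotion_classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return a score for every emotion, not just the top one
)

scores = emotion_classifier("I've been waiting two weeks and nobody calls back!")[0]
print(scores)  # list of {label, score} dicts, one per emotion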

 

// Code Implementation for Sentiment Analysis

from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
import torch.nn.functional as F

class SentimentAnalyzer:
    def __init__(self):
        model_name = "cardiffnlp/twitter-roberta-base-sentiment-latest"
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForSequenceClassification.from_pretrained(model_name)

    def analyze(self, text):
        inputs = self.tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():  # inference only, no gradients needed
            outputs = self.model(**inputs)
        probabilities = F.softmax(outputs.logits, dim=1)

        labels = ["negative", "neutral", "positive"]
        scores = {label: float(prob) for label, prob in zip(labels, probabilities[0])}

        return {
            "label": max(scores, key=scores.get),
            "scores": scores,
            "compound": scores["positive"] - scores["negative"]
        }

 

The compound score ranges from -1 (very negative) to +1 (very positive), making it easy to track sentiment trends over time.
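
As an illustration of trend tracking, you could score each Whisper segment with the two classes above to see how sentiment moves over a single call. A sketch, not project code; the file path is a placeholder:

transcriber = AudioTranscriber()
analyzer = SentimentAnalyzer()

result = transcriber.transcribe_audio("call.mp3")  # placeholder path
for segment in result["segments"]:
    # One compound score per timestamped segment gives a simple timeline
    compound = analyzer.analyze(segment["text"])["compound"]
    print(f"{segment['start']:6.1f}s  compound={compound:+.2f}")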

 

// Why Avoid Simple Lexicon Methods?

Traditional approaches like VADER count positive and negative words. However, they often miss context:

  • "This is not good." A lexicon sees "good" and scores it positive.
  • A transformer understands the negation ("not") and scores it negative.

Transformers understand relationships between words, making them far more accurate on real-world text.
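
You can check the negation claim directly with the SentimentAnalyzer defined above (exact scores will vary by model version):

analyzer = SentimentAnalyzer()

for sentence in ["This is good.", "This is not good."]:
    result = analyzer.analyze(sentence)
    print(f"{sentence!r} -> {result['label']} (compound={result['compound']:+.2f})")

# A lexicon counter sees "good" in both sentences and scores both positive;
# the transformer picks up the negation and flips the second one.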

 

Extracting Topics with BERTopic

 
Knowing sentiment is useful, but what are customers talking about? BERTopic automatically discovers themes in text without you having to pre-define them.

 

// How BERTopic Works

  • Embeddings: Convert each transcript into a vector using Sentence Transformers
  • Dimensionality reduction: UMAP compresses these vectors into a low-dimensional space
  • Clustering: HDBSCAN groups similar transcripts together
  • Topic representation: For each cluster, extract the most relevant terms using c-TF-IDF

The result is a set of topics like "billing issues," "technical support," or "product feedback." Unlike older methods such as Latent Dirichlet Allocation (LDA), BERTopic understands semantic meaning: "shipping delay" and "late delivery" cluster together because they mean the same thing.
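
For clarity, here is a sketch of that four-stage pipeline with each component configured explicitly. The parameter values are illustrative, not the project's settings:

from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN

embedding_model = SentenceTransformer("all-MiniLM-L6-v2")  # step 1: embeddings
umap_model = UMAP(n_components=5, metric="cosine")         # step 2: dimensionality reduction
hdbscan_model = HDBSCAN(min_cluster_size=2)                # step 3: clustering

# Step 4 (c-TF-IDF topic representation) runs on the resulting clusters.
topic_model = BERTopic(
    embedding_model=embedding_model,
    umap_model=umap_model,
    hdbscan_model=hdbscan_model,
)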

Code Implementation

From topics.py:

from bertopic import BERTopic

class TopicExtractor:
    def __init__(self):
        self.model = BERTopic(
            embedding_model="all-MiniLM-L6-v2",
            min_topic_size=2,
            verbose=True
        )

    def extract_topics(self, documents):
        topics, probabilities = self.model.fit_transform(documents)

        topic_info = self.model.get_topic_info()
        topic_keywords = {
            topic_id: self.model.get_topic(topic_id)[:5]
            for topic_id in set(topics) if topic_id != -1
        }

        return {
            "assignments": topics,
            "keywords": topic_keywords,
            "distribution": topic_info
        }

 

Note: Topic extraction requires multiple documents (at least 5-10) to find meaningful patterns. Single calls are analyzed using the already-fitted model.
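
An illustrative run on a handful of made-up transcripts (per the note above, real use needs more documents for stable clusters):

transcripts = [
    "I was charged twice on my last invoice.",
    "My bill shows a duplicate charge this month.",
    "The package still hasn't arrived after two weeks.",
    "Shipping said Tuesday but nothing was delivered.",
    "The app crashes every time I open settings.",
    "I get an error screen when I tap on settings.",
]

extractor = TopicExtractor()
results = extractor.extract_topics(transcripts)
print(results["keywords"])  # top five words per discovered topic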

 

Fig 5: Topic distribution bar chart showing billing, shipping, and technical support categories

 

Building an Interactive Dashboard with Streamlit

 
Raw data is hard to digest. We built a Streamlit dashboard (app.py) that lets business users explore the results. Streamlit turns Python scripts into web applications with minimal code. Our dashboard provides:

  • An upload interface for audio files
  • Real-time processing with progress indicators
  • Interactive visualizations using Plotly
  • Drill-down capability to explore individual calls

 

// Code Implementation for Dashboard Structure

import streamlit as st

# `pipeline`, `create_sentiment_gauge`, and `create_emotion_radar` are
# helpers defined elsewhere in app.py.

def main():
    st.title("Customer Sentiment Analyzer")

    uploaded_files = st.file_uploader(
        "Upload Audio Files",
        type=["mp3", "wav"],
        accept_multiple_files=True
    )

    if uploaded_files and st.button("Analyze"):
        with st.spinner("Processing..."):
            results = pipeline.process_batch(uploaded_files)

        # Display the results side by side
        col1, col2 = st.columns(2)
        with col1:
            st.plotly_chart(create_sentiment_gauge(results))
        with col2:
            st.plotly_chart(create_emotion_radar(results))
 

Streamlit's @st.cache_resource caching ensures models load once and persist across interactions, which is critical for a responsive user experience.

 

Fig 7: Full dashboard with sidebar options and multiple visualization tabs

 

// Key Features

  • Upload audio (or use the sample transcripts for testing)
  • View the transcript with sentiment highlights
  • See an emotion timeline (if the call is long enough)
  • Explore topics in interactive Plotly charts

 

// Caching for Performance

Streamlit re-runs the script on every interaction. To avoid reloading heavy models, we use @st.cache_resource:

@st.cache_resource
def load_models():
    return CallProcessor()

processor = load_models()

 

 

// Real-Time Processing

When a user uploads a file, we show a spinner while processing, then immediately display the results:

if uploaded_file:
    with st.spinner("Transcribing and analyzing..."):
        result = processor.process_file(uploaded_file)
    st.success("Done!")
    st.write(result["text"])
    st.metric("Sentiment", result["sentiment"]["label"])

 

Reviewing Practical Lessons

 
Audio Processing: From Waveform to Text

Whisper's magic is in its mel spectrogram conversion. Human hearing is logarithmic, meaning we are better at distinguishing low frequencies than high ones. The mel scale mimics this, so the model "hears" more like a human does. The spectrogram is essentially a 2D image (time vs. frequency), which the Transformer encoder processes much as it would an image patch. This is why Whisper handles noisy audio well: it sees the whole picture.

 

// Transformer Outputs: Softmax vs. Sigmoid

  • Softmax (sentiment): Forces probabilities to sum to 1. This is ideal for mutually exclusive classes, since a sentence usually isn't both positive and negative.
  • Sigmoid (emotions): Treats each class independently. A sentence can be happy and surprised at the same time; sigmoid allows this overlap.

Choosing the right activation is critical for your problem domain.
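
The difference is easy to see on a single set of logits (the values are illustrative):

import torch
import torch.nn.functional as F

logits = torch.tensor([2.0, 0.5, -1.0])

# Softmax: scores compete and the probabilities sum to 1 (mutually exclusive classes).
print(F.softmax(logits, dim=0))  # approximately [0.786, 0.175, 0.039]

# Sigmoid: each score is judged independently; values need not sum to 1.
print(torch.sigmoid(logits))     # approximately [0.881, 0.623, 0.269]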

 

// Communicating Insights with Visualization

A good dashboard does more than show numbers; it tells a story. Plotly charts are interactive: users can hover to see details, zoom into time ranges, and click legends to toggle data series. This turns raw analytics into actionable insights.
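
As an example of how little code an interactive chart needs, here is a minimal sketch of a sentiment gauge like the one in Fig 1 (the project's own create_sentiment_gauge may differ):

import plotly.graph_objects as go

def create_sentiment_gauge(compound_score):
    # Gauge over the compound score's -1 to +1 range
    return go.Figure(go.Indicator(
        mode="gauge+number",
        value=compound_score,
        title={"text": "Overall Sentiment"},
        gauge={"axis": {"range": [-1, 1]}},
    ))

create_sentiment_gauge(0.42).show()  # opens the interactive chart in the browser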

 

// Running the Application

To run the application, follow the setup steps from the beginning of this article. You can test the sentiment and emotion analysis without any audio files:

 

This runs sample text through the natural language processing (NLP) models and displays the results in the terminal.

Analyze a single recording:

python main.py --audio path/to/call.mp3

 

Batch process a directory:

python main.py --batch data/audio/

 

For the full interactive experience:

python main.py --dashboard

 

Open http://localhost:8501 in your browser.

 

Fig 8: Terminal output showing successful analysis with sentiment scores

 

Conclusion

 
We have built a complete, offline-capable system that transcribes customer calls, analyzes sentiment and emotions, and extracts recurring topics, all with open-source tools. This is a production-ready foundation for:

  • Customer support teams identifying pain points
  • Product managers gathering feedback at scale
  • Quality assurance teams monitoring agent performance

The best part? Everything runs locally, respecting user privacy and eliminating API costs.

The complete code is available on GitHub: An-AI-that-Analyze-customer-sentiment. Clone the repository, follow this local AI speech-to-text tutorial, and start extracting insights from your customer calls today.
 
 

Shittu Olumide is a software engineer and technical writer passionate about leveraging cutting-edge technologies to craft compelling narratives, with a keen eye for detail and a knack for simplifying complex concepts. You can also find Shittu on Twitter.

