Saturday, June 28, 2025

Getting Started with MLflow for LLM Evaluation

MLflow is a powerful open-source platform for managing the machine learning lifecycle. While it is traditionally used for tracking model experiments, logging parameters, and managing deployments, MLflow has recently introduced support for evaluating Large Language Models (LLMs).

In this tutorial, we explore how to use MLflow to evaluate the performance of an LLM, in our case Google's Gemini model, on a set of fact-based prompts. We generate responses to these prompts using Gemini and assess their quality using a variety of metrics supported directly by MLflow.

Setting up the dependencies

For this tutorial, we will be using both the OpenAI and Gemini APIs. MLflow's built-in generative AI evaluation metrics currently rely on OpenAI models (e.g., GPT-4) to act as judges for metrics like answer similarity or faithfulness, so an OpenAI API key is required. You can obtain API keys from the OpenAI and Google AI platforms.

Installing the libraries

pip install mlflow openai pandas google-genai

Setting the OpenAI and Google API keys as environment variables

import os
from getpass import getpass

os.environ["OPENAI_API_KEY"] = getpass('Enter OpenAI API Key:')
os.environ["GOOGLE_API_KEY"] = getpass('Enter Google API Key:')
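
Since both APIs are needed later, an optional sanity check (not part of the original tutorial) can confirm the keys are actually set before any calls are made:

```python
import os

# Report which of the required keys are missing before making any API calls.
missing = [k for k in ("OPENAI_API_KEY", "GOOGLE_API_KEY")
           if not os.environ.get(k)]
if missing:
    print("Missing keys:", ", ".join(missing))
else:
    print("Both API keys are set.")
```

Failing fast here is cheaper than discovering a missing key halfway through an evaluation run.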

Preparing Evaluation Data and Fetching Outputs from Gemini

import mlflow
import openai
import os
import pandas as pd
from google import genai

Creating the evaluation data

In this step, we define a small evaluation dataset containing factual prompts along with their correct ground truth answers. The prompts span topics such as science, health, web development, and programming. This structured format allows us to objectively compare the Gemini-generated responses against known correct answers using various evaluation metrics in MLflow.

eval_data = pd.DataFrame(
    {
        "inputs": [
            "Who developed the theory of general relativity?",
            "What are the primary functions of the liver in the human body?",
            "Explain what HTTP status code 404 means.",
            "What is the boiling point of water at sea level in Celsius?",
            "Name the largest planet in our solar system.",
            "What programming language is primarily used for developing iOS apps?",
        ],
        "ground_truth": [
            "Albert Einstein developed the theory of general relativity.",
            "The liver helps in detoxification, protein synthesis, and production of biochemicals necessary for digestion.",
            "HTTP 404 means 'Not Found' -- the server can't find the requested resource.",
            "The boiling point of water at sea level is 100 degrees Celsius.",
            "Jupiter is the largest planet in our solar system.",
            "Swift is the primary programming language used for iOS app development."
        ]
    }
)

eval_data

Getting Gemini Responses

This code block defines a helper function gemini_completion() that sends a prompt to the Gemini 1.5 Flash model using the Google Generative AI SDK and returns the generated response as plain text. We then apply this function to each prompt in our evaluation dataset to generate the model's predictions, storing them in a new "predictions" column. These predictions will later be evaluated against the ground truth answers.

client = genai.Client()

def gemini_completion(prompt: str) -> str:
    response = client.models.generate_content(
        model="gemini-1.5-flash",
        contents=prompt
    )
    return response.text.strip()

eval_data["predictions"] = eval_data["inputs"].apply(gemini_completion)
eval_data

Evaluating Gemini Outputs with MLflow

In this step, we initiate an MLflow run to evaluate the responses generated by the Gemini model against a set of factual ground-truth answers. We use the mlflow.evaluate() method with four lightweight metrics: answer_similarity (measuring semantic similarity between the model's output and the ground truth), exact_match (checking for word-for-word matches), latency (tracking response generation time), and token_count (logging the number of output tokens).

It is important to note that the answer_similarity metric internally uses OpenAI's GPT model to judge the semantic closeness between answers, which is why access to the OpenAI API is required. This setup provides an efficient way to assess LLM outputs without relying on custom evaluation logic. The final evaluation results are printed and also saved to a CSV file for later inspection or visualization.
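
For intuition, exact_match is the simplest of the four metrics: it is essentially the fraction of rows where the prediction equals the ground truth string exactly. A minimal illustrative sketch (not MLflow's internal implementation):

```python
def exact_match_score(predictions, targets):
    """Fraction of predictions that match the target string exactly."""
    matches = sum(p == t for p, t in zip(predictions, targets))
    return matches / len(targets)

preds = ["Jupiter is the largest planet in our solar system.",
         "Water boils at 100 C."]
truth = ["Jupiter is the largest planet in our solar system.",
         "The boiling point of water at sea level is 100 degrees Celsius."]
print(exact_match_score(preds, truth))  # 0.5
```

Strict string equality misses semantically correct answers (like the second one above), which is exactly why the GPT-judged answer_similarity metric is useful alongside it.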

mlflow.set_tracking_uri("mlruns")
mlflow.set_experiment("Gemini Simple Metrics Eval")

with mlflow.start_run():
    results = mlflow.evaluate(
        model_type="question-answering",
        data=eval_data,
        predictions="predictions",
        targets="ground_truth",
        extra_metrics=[
            mlflow.metrics.genai.answer_similarity(),
            mlflow.metrics.exact_match(),
            mlflow.metrics.latency(),
            mlflow.metrics.token_count()
        ]
    )
    print("Aggregated Metrics:")
    print(results.metrics)

    # Save the detailed per-row results table
    results.tables["eval_results_table"].to_csv("gemini_eval_results.csv", index=False)

To view the detailed results of our evaluation, we load the saved CSV file into a DataFrame and adjust the display settings to ensure full visibility of each response. This lets us inspect individual prompts, Gemini-generated predictions, ground truth answers, and the associated metric scores without truncation, which is especially helpful in notebook environments like Colab or Jupyter.

results = pd.read_csv('gemini_eval_results.csv')
pd.set_option('display.max_colwidth', None)
results
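
Once loaded, the per-row table can be sorted to surface the weakest answers first. The snippet below uses a small stand-in table rather than the real CSV; the judge-score column is named along the lines of "answer_similarity/v1/score" (the exact name may vary by MLflow version):

```python
import pandas as pd

# Toy stand-in for gemini_eval_results.csv (column name assumed).
table = pd.DataFrame({
    "inputs": ["Name the largest planet in our solar system.",
               "Explain what HTTP status code 404 means."],
    "answer_similarity/v1/score": [5, 3],
})

# Lowest-scoring rows first, for manual review.
worst_first = table.sort_values("answer_similarity/v1/score")
print(worst_first["inputs"].iloc[0])
```

Reviewing the lowest-scored rows by hand is a quick way to spot whether low scores reflect genuine model errors or judge quirks.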

Check out the code here. All credit for this research goes to the researchers of this project.


I am a Civil Engineering graduate (2022) from Jamia Millia Islamia, New Delhi, with a keen interest in Data Science, especially neural networks and their application in various areas.
