Langfuse

License: Unknown

Overall rating: 7.9

Stars: 7869

Contributors: 55

Langfuse is a platform designed for managing, analyzing, and improving AI-driven applications. It provides tools for observability, debugging, and fine-tuning large language model (LLM) applications. By tracking interactions and model performance, Langfuse helps developers identify and resolve issues, optimize workflows, and enhance application quality. It integrates with popular LLM frameworks and providers such as OpenAI, LangChain, and LlamaIndex for monitoring and diagnostics.
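
For a concrete sense of what trace ingestion looks like, below is a minimal sketch using the langfuse TypeScript SDK; the credentials, trace name, model, and message contents are illustrative placeholders, not values taken from this listing.

```typescript
import { Langfuse } from "langfuse";

// Placeholder credentials; real values come from the Langfuse project settings.
const langfuse = new Langfuse({
  publicKey: "pk-lf-...",
  secretKey: "sk-lf-...",
  baseUrl: "https://cloud.langfuse.com",
});

// One trace per user interaction; the LLM call is recorded as a generation on it.
const trace = langfuse.trace({ name: "chat-request", userId: "user-123" });

const generation = trace.generation({
  name: "chat-completion",
  model: "gpt-4o-mini",
  input: [{ role: "user", content: "Hello!" }],
});

// ... call the model here, then attach its output ...
generation.end({ output: "Hi, how can I help?" });

// The SDK batches events; flush before the process exits.
await langfuse.flushAsync();
```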

Key Features

  • LLM Observability: Instrument your app and start ingesting traces to Langfuse
  • Langfuse UI: Inspect and debug complex logs
  • Prompt Management: Manage, version, and deploy prompts from within Langfuse (see the prompt sketch after this list)
  • Prompt Engineering: Test and iterate on your prompts with the LLM Playground
  • LLM Analytics: Track metrics (cost, latency, quality) and gain insights from dashboards & data exports
  • LLM Evaluations: Collect and calculate scores for your LLM completions
    • Run LLM-as-a-judge evaluations within Langfuse
    • Collect user feedback
    • Manually score LLM outputs in Langfuse
  • Experiments: Track and test app behaviour before deploying a new version
    • Datasets let you test expected input and output pairs and benchmark performance before deploying (see the experiment sketch after this list)
    • Track versions and releases in your application
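
As a sketch of the prompt-management workflow referenced above, the snippet below fetches a versioned prompt and resolves its variables with the TypeScript SDK; the prompt name "qa-prompt" and its {{question}} variable are hypothetical and would need to exist in your Langfuse project.

```typescript
import { Langfuse } from "langfuse";

// Assumes LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY are set in the environment;
// keys can also be passed to the constructor explicitly.
const langfuse = new Langfuse();

// Fetch the currently deployed version of a prompt managed in Langfuse.
const prompt = await langfuse.getPrompt("qa-prompt");

// Fill in the prompt's template variables; the result is the text sent to the model.
const compiledPrompt = prompt.compile({ question: "What does Langfuse do?" });
console.log(compiledPrompt);
```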
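
The evaluation and experiment bullets map onto the score and dataset APIs. A rough sketch follows, again with hypothetical names: the trace id, dataset name, and run label are placeholders.

```typescript
import { Langfuse } from "langfuse";

const langfuse = new Langfuse();

// Attach a user-feedback score to an existing trace (the id is a placeholder).
langfuse.score({
  traceId: "trace-id-from-earlier-request",
  name: "user-feedback",
  value: 1,
  comment: "Thumbs up from the end user",
});

// Benchmark a new app version against a dataset of expected input/output pairs.
const dataset = await langfuse.getDataset("qa-regression-set");
for (const item of dataset.items) {
  const trace = langfuse.trace({ name: "experiment-run", input: item.input });
  // ... invoke the candidate version with item.input and record its output ...
  await item.link(trace, "v2-candidate-run"); // groups results under a named experiment run
}

await langfuse.flushAsync();
```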

Activity

Last update: Jan 13, 2025

  • Commits (last week): 0
  • Resolved issues (last week): 86
  • Merged PRs (last week): 49

Maturity

Last update: Jan 18, 2025

  • Age: 1 year 8 months
  • Stability: STABLE

Information

Funding

Has commercial version

Programming languages

TypeScript
JavaScript
Shell

Tags

prompt-engineering
llm-observability
gpt
evaluation
large-language-models
llama-index
analytics
llmops
langchain
ycombinator
open-source
self-hosted
prompt-management
llm-evaluation
playground
hacktoberfest
openai
monitoring
observability
llm