SmartPrompt

SmartPrompt is a powerful Ruby gem that provides an elegant domain-specific language (DSL) for building intelligent applications with Large Language Models (LLMs). It lets Ruby programs talk to different LLM providers through one unified interface while keeping your code clean, composable, and easy to customize.

πŸš€ Key Features

Multi-LLM Support

  • OpenAI API Compatible: Full support for OpenAI GPT models and compatible APIs
  • Llama.cpp Integration: Direct integration with local Llama.cpp servers
  • Extensible Adapters: Easy-to-extend adapter system for new LLM providers
  • Unified Interface: Same API regardless of the underlying LLM provider

Flexible Architecture

  • Worker-based Tasks: Define reusable workers for specific AI tasks
  • Template System: ERB-based prompt templates with parameter injection
  • Conversation Management: Built-in conversation history and context management
  • Streaming Support: Real-time response streaming for better user experience

Advanced Features

  • Tool Calling: Native support for function calling and tool integration
  • Retry Logic: Robust error handling with configurable retry mechanisms
  • Embeddings: Text embedding generation for semantic search and RAG applications
  • Configuration-driven: YAML-based configuration for easy deployment management

Production Ready

  • Comprehensive Logging: Detailed logging for debugging and monitoring
  • Error Handling: Graceful error handling with custom exception types
  • Performance Optimized: Efficient resource usage and response caching
  • Thread Safe: Safe for concurrent usage in multi-threaded applications

πŸ“¦ Installation

Add to your Gemfile:

gem 'smart_prompt'

Then execute:

$ bundle install

Or install directly:

$ gem install smart_prompt

πŸ› οΈ Quick Start

1. Configuration

Create a YAML configuration file (config/smart_prompt.yml):

# Adapter definitions
adapters:
  openai: OpenAIAdapter
# LLM configurations
llms:
  SiliconFlow:
    adapter: openai
    url: https://api.siliconflow.cn/v1/
    api_key: ENV["APIKey"]
    default_model: Qwen/Qwen2.5-7B-Instruct
  llamacpp:
    adapter: openai
    url: http://localhost:8080/    
  ollama:
    adapter: openai
    url: http://localhost:11434/
    default_model: deepseek-r1
  deepseek:
    adapter: openai
    url: https://api.deepseek.com
    api_key: ENV["DSKEY"]
    default_model: deepseek-reasoner

# Default settings
default_llm: SiliconFlow
template_path: "./templates"
worker_path: "./workers"
logger_file: "./logs/smart_prompt.log"

2. Create Prompt Templates

Create template files in your templates/ directory:

templates/chat.erb:

You are a helpful assistant. Please respond to the following question:

Question: <%= question %>

Context: <%= context || "No additional context provided" %>
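
These templates are plain ERB: each <%= %> tag is evaluated against the parameters passed to prompt. As an illustration (not SmartPrompt's internal code), the same rendering step can be reproduced with nothing but the Ruby standard library:

require 'erb'

template = File.read('templates/chat.erb')
question = "What is machine learning?"
context  = nil
# ERB resolves each <%= ... %> tag against the local binding,
# so these locals stand in for the injected parameters.
puts ERB.new(template).result(binding)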

3. Define Workers

Create worker files in your workers/ directory:

workers/chat_worker.rb:

SmartPrompt.define_worker :chat_assistant do
  # Use a specific LLM
  use "SiliconFlow"
  model "deepseek-ai/DeepSeek-V3"
  # Set system message
  sys_msg("You are a helpful AI assistant.", params)
  # Use template with parameters  
  prompt(:chat, {
    question: params[:question],
    context: params[:context]
  })
  # Send message and return response
  send_msg
end

4. Use in Your Application

require 'smart_prompt'

# Initialize engine with config
engine = SmartPrompt::Engine.new('config/smart_prompt.yml')

# Execute worker
result = engine.call_worker(:chat_assistant, {
  question: "What is machine learning?",
  context: "We're discussing AI technologies"
})

puts result

πŸ“š Advanced Usage

Streaming Responses

# Define streaming worker
SmartPrompt.define_worker :streaming_chat do
  use "deepseek"
  model "deepseek-chat"
  sys_msg("You are a helpful assistant.")
  prompt(params[:message])
  send_msg
end

# Use with streaming
engine.call_worker_by_stream(:streaming_chat, {
  message: "Tell me a story"
}) do |chunk, bytesize|
  print chunk.dig("choices", 0, "delta", "content")
end

Tool Integration

# Define worker with tools
SmartPrompt.define_worker :assistant_with_tools do
  use "SiliconFlow"
  model "Qwen/Qwen3-235B-A22B"
  tools = [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get weather information for a location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string", 
              description: "The city and state"
            }
          },
          required: ["location"]
        }
      }
    }
  ]
  
  sys_msg("You can help with weather queries using available tools.", params)
  prompt(params[:message])
  params.merge!(tools: tools) # merge! mutates params so the tools reach send_msg; a plain merge would be discarded
  send_msg
end
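
When the model opts to use a tool, the call details come back in the response. A hedged dispatch sketch, assuming call_worker surfaces an OpenAI-style response hash with string keys (as in the streaming example above; the exact shape may vary by provider):

require 'json'

result = engine.call_worker(:assistant_with_tools, { message: "What's the weather in Boston?" })

# OpenAI-compatible APIs list requested calls under message.tool_calls.
tool_calls = result.dig("choices", 0, "message", "tool_calls") rescue nil
Array(tool_calls).each do |call|
  args = JSON.parse(call.dig("function", "arguments"))
  puts "Model requested #{call.dig('function', 'name')}(#{args.inspect})"
end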

Conversation History

SmartPrompt.define_worker :conversational_chat do
  use "deepseek"
  model "deepseek-chat"
  sys_msg("You are a helpful assistant that remembers conversation context.")
  prompt(params[:message], with_history: true)
  send_msg
end
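
Because the prompt is sent with with_history: true, the accumulated messages travel with each request, so later calls can refer back to earlier turns (history scope depends on the engine's conversation management):

engine.call_worker(:conversational_chat, { message: "My name is Ada." })
reply = engine.call_worker(:conversational_chat, { message: "What did I say my name was?" })
puts reply  # answered from the carried-over history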

Embeddings Generation

SmartPrompt.define_worker :text_embedder do
  use "SiliconFlow"
  model "BAAI/bge-m3"
  prompt params[:text]  
  embeddings(params[:dimensions] || 1024)
end

# Usage
embeddings = engine.call_worker(:text_embedder, {
  text: "Convert this text to embeddings",
  dimensions: 1024
})
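
Assuming the worker returns the raw vector (an Array of Floats), as the usage above suggests, ranking documents for semantic search reduces to a similarity measure such as a plain-Ruby cosine similarity:

# Cosine similarity between two embedding vectors of equal length.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

query_vec = engine.call_worker(:text_embedder, { text: "machine learning" })
doc_vec   = engine.call_worker(:text_embedder, { text: "neural networks" })
puts cosine_similarity(query_vec, doc_vec)  # 1.0 = identical direction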

πŸ—οΈ Architecture Overview

SmartPrompt follows a modular architecture:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Application   β”‚    β”‚   SmartPrompt    β”‚    β”‚   LLM Provider  β”‚
β”‚                 │◄──►│     Engine       │◄──►│   (OpenAI/      β”‚
β”‚                 β”‚    β”‚                  β”‚    β”‚    Llama.cpp)   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                β”‚
                       β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”
                       β”‚        β”‚        β”‚
                   β”Œβ”€β”€β”€β–Όβ”€β”€β”€β” β”Œβ”€β”€β–Όβ”€β”€β” β”Œβ”€β”€β”€β–Όβ”€β”€β”€β”€β”
                   β”‚Workersβ”‚ β”‚Conv.β”‚ β”‚Templateβ”‚
                   β”‚       β”‚ β”‚Mgmt β”‚ β”‚ System β”‚
                   β””β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Core Components

  • Engine: Central orchestrator managing configuration, adapters, and workers
  • Workers: Reusable task definitions with embedded business logic
  • Conversation: Context and message history management
  • Adapters: LLM provider integrations (OpenAI, Llama.cpp, etc.)
  • Templates: ERB-based prompt template system

πŸ”§ Configuration Reference

Adapter Configuration

adapters:
  openai: "OpenAIAdapter"      # For OpenAI API

LLM Configuration

llms:
  provider_name:                # Arbitrary name; workers reference it via `use`
    adapter: "adapter_name"
    api_key: "your_api_key"     # Can use ENV['KEY_NAME']
    url: "https://api.url"
    default_model: "model_identifier"
    temperature: 0.7
    # Additional provider-specific options

Path Configuration

template_path: "./templates"   # Directory for .erb templates
worker_path: "./workers"       # Directory for worker definitions  
logger_file: "./logs/app.log"  # Log file location
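
A project layout matching these defaults (illustrative):

.
β”œβ”€β”€ config/smart_prompt.yml
β”œβ”€β”€ templates/
β”‚   └── chat.erb
β”œβ”€β”€ workers/
β”‚   └── chat_worker.rb
└── logs/
    └── app.log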

πŸ§ͺ Testing

Run the test suite:

bundle exec rake test

For development, you can use the console:

bundle exec bin/console

🀝 Integration Examples

With Rails Applications

# config/initializers/smart_prompt.rb
class SmartPromptService
  def self.engine
    @engine ||= SmartPrompt::Engine.new(
      Rails.root.join('config', 'smart_prompt.yml')
    )
  end
  
  def self.chat(message, context: nil)
    engine.call_worker(:chat_assistant, {
      question: message,
      context: context
    })
  end
end

# In your controller
class ChatController < ApplicationController
  def create
    response = SmartPromptService.chat(
      params[:message],
      context: session[:conversation_context]
    )
    
    render json: { response: response }
  end
end

With Sidekiq Background Jobs

# An ActiveJob job; Sidekiq runs it when configured as the queue adapter
class LLMProcessingJob < ApplicationJob
  def perform(task_type, parameters)
    engine = SmartPrompt::Engine.new('config/smart_prompt.yml')
    result = engine.call_worker(task_type.to_sym, parameters)
    
    # Process result...
    NotificationService.send_completion(result)
  end
end
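
Enqueue it through the standard ActiveJob API; Sidekiq picks it up when set as the queue adapter:

LLMProcessingJob.perform_later("chat_assistant", { question: "Summarize this support ticket" })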

πŸš€ Real-world Use Cases

  • Chatbots and Conversational AI: Build sophisticated chatbots with context awareness
  • Content Generation: Automated content creation with template-driven prompts
  • Code Analysis: AI-powered code review and documentation generation
  • Customer Support: Intelligent ticket routing and response suggestions
  • Data Processing: LLM-powered data extraction and transformation
  • Educational Tools: AI tutors and learning assistance systems

πŸ›£οΈ Roadmap

  • Additional LLM provider adapters (Anthropic Claude, Google PaLM)
  • Visual prompt builder and management interface
  • Enhanced caching and performance optimizations
  • Integration with vector databases for RAG applications
  • Built-in evaluation and testing framework for prompts
  • Distributed worker execution support

🀝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -am 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE.txt file for details.

πŸ™ Acknowledgments

  • Built with ❀️ by the SmartPrompt team
  • Inspired by the need for elegant LLM integration in Ruby applications
  • Thanks to all contributors and the Ruby community

SmartPrompt - Making LLM integration in Ruby applications simple, powerful, and elegant.