llm-spell

llm-spell is a command-line utility that can correct spelling mistakes with the help of a Large Language Model (LLM). Compared to traditional spell checkers like `aspell` and `hunspell`, llm-spell often produces fewer false positives and more accurate suggestions.

Project Readme

About

llm-spell is both a library and command-line utility that corrects spelling mistakes using a Large Language Model (LLM). It is powered by llm.rb.

Features

  • LLM-powered corrections – smarter spelling fixes than traditional tools
  • 🤖 Fewer false positives – avoids flagging uncommon but valid words
  • 🌐 Broad provider support – use OpenAI, Gemini, or xAI (Grok) out of the box
  • 💻 Offline ready – run locally with Ollama and LlamaCpp, no cloud required
  • 🔒 Privacy – keep sensitive text local with offline models
  • 🛠️ Easy to use – ships as both a simple library and a command-line utility

Library

#!/usr/bin/env ruby
require "llm"
require "llm/spell"

##
# Text
llm  = LLM.openai(key: ENV["OPENAI_SECRET"])
text = LLM::Spell::Text.new("Ths is a smple txt with sme speling erors.", llm)
print "mistakes: ", text.mistakes, "\n"
print "corrections: ", text.corrections, "\n"

##
# PDF
llm  = LLM.openai(key: ENV["OPENAI_SECRET"])
file = LLM::Spell::Document.new("typos.pdf", llm)
print "mistakes: ", file.mistakes, "\n"
print "corrections: ", file.corrections, "\n"

CLI

Configuration

The command-line interface can be configured through the configuration file located at $XDG_CONFIG_HOME/llm-spell.yml or ~/.config/llm-spell.yml. This step is required to use llm-spell from the command line, but it is not required when using llm-spell as a library:

# ~/.config/llm-spell.yml
openai:
  key: YOURKEY
gemini:
  key: YOURKEY
xai:
  key: YOURKEY
ollama:
  host: localhost
  model: gpt-oss
llamacpp:
  host: localhost

Usage

Usage: llm-spell [OPTIONS]
    -p, --provider NAME              Required. Options: gemini, openai, xai, ollama or llamacpp.
    -f, --file FILE                  Required. The file to check.
    -v, --version                    Optional. Print the version and exit.
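
For example, to check a file with the openai provider configured above (the filename here is only an illustration):

llm-spell --provider openai --file README.md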

Demo

Demo of llm-spell in action

Install

llm-spell requires Ruby 3.2+ and can be installed via RubyGems:

gem install llm-spell

Background

This project was born while I was working on the documentation for a friend's open source project. After realizing how much manual effort was involved with traditional spell checkers, I decided to see if I could leverage LLMs to make the process easier and faster.

Compared to traditional spell checkers like aspell and hunspell, llm-spell provides significantly more accurate suggestions with far fewer false positives – eliminating the need to manually dismiss irrelevant corrections, which often reduces the overall time spent fixing spelling mistakes.

I would call the experiment a success, but I also realize this approach is not for everyone or every situation. For example, my friend preferred not to use AI for this, and instead we opted to stick with hunspell – even though it meant more manual work.

License

BSD Zero Clause
See LICENSE