GRNexus v1.0
The Ultimate Cross-Language Neural Network Framework
Train in Ruby. Deploy in Python. Or vice versa. Your choice.
What Makes GRNexus Special?
GRNexus is not just another neural network framework. It's a revolutionary cross-language AI platform that breaks the barriers between Ruby and Python, combining the elegance of high-level languages with the raw power of native C acceleration.
The Magic Trinity
┌──────────────┐       ┌──────────────┐       ┌──────────────┐
│     Ruby     │ <───> │    .nexus    │ <───> │    Python    │
│   Elegance   │       │    Format    │       │     Power    │
└──────────────┘       └──────────────┘       └──────────────┘
        │                      │                      │
        └──────────────────────┴──────────────────────┘
                               │
                     ┌─────────▼─────────┐
                     │   Native C Core   │
                     │  10-100x Faster   │
                     └───────────────────┘
Superpowers Unlocked
| Feature | Description | Status |
|---|---|---|
| Blazing Fast | Native C implementation (10-100x faster) | ✓ Production Ready |
| Cross-Language | Ruby ↔ Python model compatibility | ✓ 100% Compatible |
| Text AI | Complete NLP pipeline (tokenization, embeddings, TF-IDF) | ✓ Full Suite |
| Numeric Ops | 40+ operations (stats, normalization, time series) | ✓ Comprehensive |
| 35+ Activations | GELU, Swish, Mish, Snake, and more | ✓ State-of-the-art |
| 12+ Layers | Dense, Conv2D, LSTM, GRU, BatchNorm, Dropout | ✓ Production Grade |
| Smart Training | EarlyStopping, ModelCheckpoint, ReduceLR | ✓ Intelligent |
| Model Inspector | Analyze models without loading | ✓ Unique Feature |
| Cross-Platform | Windows, macOS, Linux | ✓ Universal |
| Zero Dependencies | Pure Ruby/Python + C (no TensorFlow/PyTorch) | ✓ Lightweight |
What's New in v1.0
Major Features
- Cross-Language Model Compatibility
  - Save models in Ruby, load in Python (and vice versa)
  - Universal `.nexus` format with metadata
  - Automatic architecture reconstruction
  - BatchNorm statistics preserved correctly
- Complete Text Processing
  - Vocabulary management
  - TF-IDF vectorization
  - Word embeddings with Xavier initialization
  - Document similarity
  - Sentiment analysis ready
  - Improved EmbeddingLayer for NLP tasks
- Advanced Numeric Processing
  - Statistical operations (mean, std, variance)
  - Normalization (Z-score, MinMax)
  - Time series (moving average, differences, integration)
  - Array operations (concatenate, power, modulo)
- Model Inspection
  - Analyze models without loading
  - View architecture, parameters, training history
  - Cross-language metadata
- Smart Training
  - Intelligent callbacks
  - Automatic learning rate adjustment
  - Early stopping
  - Best model checkpointing
- Enhanced Layer Support
  - FlattenLayer now handles 3D tensors (batch × sequence × features)
  - EmbeddingLayer with Xavier initialization
  - Better text and sequence processing
  - Full support for NLP architectures
Quick Start
Installation
# Clone the repository
git clone https://github.com/grcodedigitalsolutions/GRNexus.git
cd GRNexus
# That's it! No dependencies to install
Run All Tests
# Windows
windows_run.bat
# macOS
chmod +x mac.sh && ./mac.sh
# Linux
chmod +x linux.sh && ./linux.sh
30-Second Example: XOR Problem
Ruby:
require_relative 'ruby/grnexus'
# XOR dataset
x_train = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
y_train = [[0.0], [1.0], [1.0], [0.0]]
# Build model
model = GRNexus::NeuralNetwork.new(loss: 'mse', learning_rate: 1.0)
model.add(GRNEXUSLayer::DenseLayer.new(units: 8, input_dim: 2, activation: GRNEXUSActivations::Tanh.new))
model.add(GRNEXUSLayer::DenseLayer.new(units: 1, input_dim: 8, activation: GRNEXUSActivations::Sigmoid.new))
# Train
model.train(x_train, y_train, epochs: 5000, batch_size: 4, verbose: false)
# Save (works in Python too!)
model.save('xor_model.nexus')
# Predict
puts model.predict([[0.0, 0.0]])[0][0].round(2) # => ~0.0
puts model.predict([[1.0, 1.0]])[0][0].round(2) # => ~0.0
puts model.predict([[0.0, 1.0]])[0][0].round(2) # => ~1.0
puts model.predict([[1.0, 0.0]])[0][0].round(2) # => ~1.0
Python:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer
from lib.grnexus_activations import Tanh, Sigmoid
# XOR dataset
x_train = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
y_train = [[0.0], [1.0], [1.0], [0.0]]
# Build model
model = NeuralNetwork(loss='mse', learning_rate=1.0)
model.add(DenseLayer(units=8, input_dim=2, activation=Tanh()))
model.add(DenseLayer(units=1, input_dim=8, activation=Sigmoid()))
# Train
model.train(x_train, y_train, epochs=5000, batch_size=4, verbose=False)
# Save (works in Ruby too!)
model.save('xor_model.nexus')
# Predict
print(f"{model.predict([[0.0, 0.0]])[0][0]:.2f}") # => ~0.0
print(f"{model.predict([[1.0, 1.0]])[0][0]:.2f}") # => ~0.0
print(f"{model.predict([[0.0, 1.0]])[0][0]:.2f}") # => ~1.0
print(f"{model.predict([[1.0, 0.0]])[0][0]:.2f}") # => ~1.0Real-World Example: Sentiment Analysis (Complete)
Python - Simple Sentiment Analysis:
from grnexus import NeuralNetwork
from lib.grnexus_text_proccessing import Vocabulary, TextVectorizer
from lib.grnexus_layers import DenseLayer, DropoutLayer
from lib.grnexus_activations import ReLU, Tanh
from lib.grnexus_normalization import Softmax
# Training data
texts = [
"I love this product it's excellent",
"terrible product very bad quality",
"amazing quality exceeded expectations",
"worst purchase ever disappointed",
"highly recommend great value",
"waste of money poor quality"
]
labels = [[1, 0], [0, 1], [1, 0], [0, 1], [1, 0], [0, 1]] # [positive, negative]
# Create vocabulary and vectorize
vocab = Vocabulary(texts, max_vocab_size=100)
vectorizer = TextVectorizer(vocab)
x_train = [vectorizer.vectorize(text) for text in texts]
# Build sentiment analyzer
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.05, name='sentiment_analyzer')
model.add(DenseLayer(32, vocab.size, activation=ReLU()))
model.add(DropoutLayer(rate=0.3))
model.add(DenseLayer(16, 32, activation=Tanh()))
model.add(DenseLayer(2, 16, activation=Softmax()))
# Train
model.train(x_train, labels, epochs=100, batch_size=2, verbose=True)
# Test predictions
test_text = "excellent product very good"
test_vector = vectorizer.vectorize(test_text)
prediction = model.predict([test_vector])[0]
sentiment = "POSITIVE" if prediction[0] > prediction[1] else "NEGATIVE"
confidence = max(prediction) * 100
print(f"Text: '{test_text}'")
print(f"Sentiment: {sentiment} ({confidence:.2f}% confidence)")
# Save for Ruby
model.save('models/sentiment_analyzer.nexus')
Ruby - Same Sentiment Analysis:
require_relative 'ruby/grnexus'
# Training data
texts = [
"I love this product it's excellent",
"terrible product very bad quality",
"amazing quality exceeded expectations",
"worst purchase ever disappointed",
"highly recommend great value",
"waste of money poor quality"
]
labels = [[1, 0], [0, 1], [1, 0], [0, 1], [1, 0], [0, 1]] # [positive, negative]
# Create vocabulary and vectorize
vocab = GRNexusTextProcessing::Vocabulary.new(texts, max_vocab_size: 100)
vectorizer = GRNexusTextProcessing::TextVectorizer.new(vocab)
x_train = texts.map { |text| vectorizer.vectorize(text) }
# Build sentiment analyzer
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.05, name: 'sentiment_analyzer')
model.add(GRNEXUSLayer::DenseLayer.new(units: 32, input_dim: vocab.size, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.3))
model.add(GRNEXUSLayer::DenseLayer.new(units: 16, input_dim: 32, activation: GRNEXUSActivations::Tanh.new))
model.add(GRNEXUSLayer::DenseLayer.new(units: 2, input_dim: 16, activation: GRNEXUSNormalization::Softmax.new))
# Train
model.train(x_train, labels, epochs: 100, batch_size: 2, verbose: true)
# Test predictions
test_text = "excellent product very good"
test_vector = vectorizer.vectorize(test_text)
prediction = model.predict([test_vector])[0]
sentiment = prediction[0] > prediction[1] ? "POSITIVE" : "NEGATIVE"
confidence = prediction.max * 100
puts "Text: '#{test_text}'"
puts "Sentiment: #{sentiment} (#{confidence.round(2)}% confidence)"
# Save for Python
model.save('models/sentiment_analyzer.nexus')
Advanced: Sentiment Analysis with Embeddings:
from grnexus import NeuralNetwork
from lib.grnexus_text_proccessing import Vocabulary, TextEmbeddings
from lib.grnexus_layers import EmbeddingLayer, DenseLayer, DropoutLayer, FlattenLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Larger dataset
texts = [
"This movie is absolutely fantastic and amazing",
"Terrible film waste of time and money",
"Great acting superb storyline loved it",
"Boring predictable disappointing experience",
# ... more training data
]
labels = [[1, 0], [0, 1], [1, 0], [0, 1]] # [positive, negative]
# Create vocabulary
vocab = Vocabulary(texts, max_vocab_size=5000)
# Normalize texts to sequences of indices
max_length = 20
x_train = [vocab.normalize_text(text, max_length=max_length) for text in texts]
# Build model with embedding layer
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001, name='sentiment_embeddings')
# Embedding layer converts word indices to dense vectors
model.add(EmbeddingLayer(
vocab_size=vocab.size,
embedding_dim=128,
input_length=max_length
))
# Flatten embeddings
model.add(FlattenLayer()) # Output: max_length * embedding_dim
# Dense layers
model.add(DenseLayer(64, max_length * 128, activation=ReLU()))
model.add(DropoutLayer(rate=0.5))
model.add(DenseLayer(32, 64, activation=ReLU()))
model.add(DenseLayer(2, 32, activation=Softmax()))
# Train
model.train(x_train, labels, epochs=50, batch_size=16, verbose=True)
# Predict
test_text = "amazing movie highly recommended"
test_seq = vocab.normalize_text(test_text, max_length=max_length)
prediction = model.predict([test_seq])[0]
print(f"Sentiment: {'POSITIVE' if prediction[0] > prediction[1] else 'NEGATIVE'}")
print(f"Confidence: {max(prediction)*100:.2f}%")
model.save('sentiment_embeddings.nexus')
Ruby - Sentiment with Embeddings:
require_relative 'ruby/grnexus'
# Larger dataset
texts = [
"This movie is absolutely fantastic and amazing",
"Terrible film waste of time and money",
"Great acting superb storyline loved it",
"Boring predictable disappointing experience"
]
labels = [[1, 0], [0, 1], [1, 0], [0, 1]] # [positive, negative]
# Create vocabulary
vocab = GRNexusTextProcessing::Vocabulary.new(texts, max_vocab_size: 5000)
# Normalize texts to sequences of indices
max_length = 20
x_train = texts.map { |text| vocab.normalize_text(text, max_length: max_length) }
# Build model with embedding layer
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001, name: 'sentiment_embeddings')
# Embedding layer converts word indices to dense vectors
model.add(GRNEXUSLayer::EmbeddingLayer.new(
vocab_size: vocab.size,
embedding_dim: 128,
input_length: max_length
))
# Flatten embeddings
model.add(GRNEXUSLayer::FlattenLayer.new) # Output: max_length * embedding_dim
# Dense layers
model.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: max_length * 128, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.5))
model.add(GRNEXUSLayer::DenseLayer.new(units: 32, input_dim: 64, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DenseLayer.new(units: 2, input_dim: 32, activation: GRNEXUSNormalization::Softmax.new))
# Train
model.train(x_train, labels, epochs: 50, batch_size: 16, verbose: true)
# Predict
test_text = "amazing movie highly recommended"
test_seq = vocab.normalize_text(test_text, max_length: max_length)
prediction = model.predict([test_seq])[0]
sentiment = prediction[0] > prediction[1] ? "POSITIVE" : "NEGATIVE"
puts "Sentiment: #{sentiment}"
puts "Confidence: #{(prediction.max * 100).round(2)}%"
model.save('sentiment_embeddings.nexus')
Load and use cross-language:
# Load Python model in Ruby
model = GRNexus::NeuralNetwork.load('sentiment_embeddings.nexus')
# => Loading model: GRNexus v1.0 (created in Python)
# Use it immediately!
prediction = model.predict(test_data)
puts "Sentiment: #{prediction[0] > prediction[1] ? 'POSITIVE' : 'NEGATIVE'}"# Load Ruby model in Python
model = NeuralNetwork.load('sentiment_embeddings.nexus')
# => Loading model: GRNexus v1.0 (created in Ruby)
# Use it immediately!
prediction = model.predict(test_data)
print(f"Sentiment: {'POSITIVE' if prediction[0] > prediction[1] else 'NEGATIVE'}")๐ The Cross-Language Magic
This is where GRNexus truly shines. Train in one language, deploy in another:
# Team A: Ruby developers train a model
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.1)
model.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 50, activation: GRNEXUSActivations::GELU.new))
model.add(GRNEXUSLayer::BatchNormLayer.new)
model.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 128, activation: GRNEXUSNormalization::Softmax.new))
model.train(x_train, y_train, epochs: 50)
model.save('shared_model.nexus')
# Team B: Python developers use it
model = NeuralNetwork.load('shared_model.nexus')
# => Loading model: GRNexus v1.0 (created in Ruby)
# Total params: 6,538
# Layers: 3
# Continue training with new data
model.train(new_x, new_y, epochs=20)
# Deploy in production
predictions = model.predict(production_data)
Supported paths:
- ✓ Ruby → Python
- ✓ Python → Ruby
- ✓ Ruby → Ruby (obviously)
- ✓ Python → Python (obviously)
- ✓ Relative paths: `../models/model.nexus`
- ✓ Absolute paths: `/home/user/models/model.nexus`
- ✓ Windows paths: `C:\Models\model.nexus` (a short sketch of these path forms follows this list)
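A minimal sketch of the path handling above; the file names and directories here are placeholders, not files shipped with the repository:
from grnexus import NeuralNetwork
# Any of the supported path forms can be passed straight to save()/load()
model = NeuralNetwork.load('shared_model.nexus')                     # same directory
model = NeuralNetwork.load('../models/shared_model.nexus')          # relative path
model = NeuralNetwork.load('/home/user/models/shared_model.nexus')  # absolute path (Linux/macOS)
model.save('C:\\Models\\shared_model.nexus')                        # absolute path (Windows)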
Advanced Examples
1. Complete Layer Usage Examples
Example 1: Basic Dense Network
Python:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, DropoutLayer
from lib.grnexus_activations import ReLU, Sigmoid
from lib.grnexus_normalization import Softmax
# Create a simple dense network
model = NeuralNetwork(learning_rate=0.1)
# Hidden layer with ReLU
model.add(DenseLayer(units=4, input_dim=2, activation=ReLU()))
# Output layer with Sigmoid
model.add(DenseLayer(units=1, input_dim=4, activation=Sigmoid()))
# XOR problem
x_train = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
y_train = [[0.0], [1.0], [1.0], [0.0]]
# Train
model.compile(loss='mse')
model.train(x_train, y_train, epochs=100, batch_size=4, verbose=False)
# Predict
predictions = model.predict(x_train)
for i, x in enumerate(x_train):
print(f"Input: {x} -> Predicted: {predictions[i][0]:.2f}, Target: {y_train[i][0]}")Ruby:
require_relative 'ruby/grnexus'
# Create a simple dense network
model = GRNexus::NeuralNetwork.new(learning_rate: 0.1)
# Hidden layer with ReLU
model.add(GRNEXUSLayer::DenseLayer.new(units: 4, input_dim: 2, activation: GRNEXUSActivations::ReLU.new))
# Output layer with Sigmoid
model.add(GRNEXUSLayer::DenseLayer.new(units: 1, input_dim: 4, activation: GRNEXUSActivations::Sigmoid.new))
# XOR problem
x_train = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
y_train = [[0.0], [1.0], [1.0], [0.0]]
# Train
model.compile(loss: 'mse')
model.train(x_train, y_train, epochs: 100, batch_size: 4, verbose: false)
# Predict
predictions = model.predict(x_train)
x_train.each_with_index do |x, i|
puts "Input: #{x} -> Predicted: #{predictions[i][0].round(2)}, Target: #{y_train[i][0]}"
end
Example 2: Network with Dropout Regularization
Python:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, DropoutLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Binary classification with dropout
model = NeuralNetwork(learning_rate=0.01)
model.add(DenseLayer(units=8, input_dim=3, activation=ReLU()))
model.add(DropoutLayer(rate=0.3)) # Drop 30% during training
model.add(DenseLayer(units=4, input_dim=8, activation=ReLU()))
model.add(DropoutLayer(rate=0.2)) # Drop 20% during training
model.add(DenseLayer(units=2, input_dim=4, activation=Softmax()))
# Training data
x_train = [
[1.0, 2.0, 3.0], [1.5, 2.5, 3.5], [1.2, 2.2, 3.2],
[5.0, 6.0, 7.0], [5.5, 6.5, 7.5], [5.2, 6.2, 7.2]
]
y_train = [
[1.0, 0.0], [1.0, 0.0], [1.0, 0.0],
[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]
]
model.compile(loss='cross_entropy')
model.train(x_train, y_train, epochs=50, batch_size=2, verbose=False)
# Predict (dropout automatically disabled during inference)
predictions = model.predict(x_train)
for i, pred in enumerate(predictions):
pred_class = pred.index(max(pred))
target_class = y_train[i].index(max(y_train[i]))
print(f"Sample {i + 1}: Predicted class {pred_class}, Target class {target_class}")Ruby:
require_relative 'ruby/grnexus'
# Binary classification with dropout
model = GRNexus::NeuralNetwork.new(learning_rate: 0.01)
model.add(GRNEXUSLayer::DenseLayer.new(units: 8, input_dim: 3, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.3)) # Drop 30% during training
model.add(GRNEXUSLayer::DenseLayer.new(units: 4, input_dim: 8, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.2)) # Drop 20% during training
model.add(GRNEXUSLayer::DenseLayer.new(units: 2, input_dim: 4, activation: GRNEXUSNormalization::Softmax.new))
# Training data
x_train = [
[1.0, 2.0, 3.0], [1.5, 2.5, 3.5], [1.2, 2.2, 3.2],
[5.0, 6.0, 7.0], [5.5, 6.5, 7.5], [5.2, 6.2, 7.2]
]
y_train = [
[1.0, 0.0], [1.0, 0.0], [1.0, 0.0],
[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]
]
model.compile(loss: 'cross_entropy')
model.train(x_train, y_train, epochs: 50, batch_size: 2, verbose: false)
# Predict (dropout automatically disabled during inference)
predictions = model.predict(x_train)
predictions.each_with_index do |pred, i|
pred_class = pred.index(pred.max)
target_class = y_train[i].index(y_train[i].max)
puts "Sample #{i + 1}: Predicted class #{pred_class}, Target class #{target_class}"
end
Example 3: Batch Normalization for Stable Training
Python:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, BatchNormLayer, ActivationLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Multi-class classification with BatchNorm
model = NeuralNetwork(learning_rate=0.01)
# Layer 1: Dense + BatchNorm + Activation
model.add(DenseLayer(units=16, input_dim=4, activation=None)) # No activation yet
model.add(BatchNormLayer(epsilon=1e-5, momentum=0.1))
model.add(ActivationLayer(ReLU()))
# Layer 2: Dense + BatchNorm + Activation
model.add(DenseLayer(units=8, input_dim=16, activation=None))
model.add(BatchNormLayer(epsilon=1e-5, momentum=0.1))
model.add(ActivationLayer(ReLU()))
# Output layer
model.add(DenseLayer(units=3, input_dim=8, activation=Softmax()))
# Multi-class data (3 classes)
x_train = [
[1.0, 1.0, 1.0, 1.0], [1.2, 1.1, 1.3, 1.0],
[3.0, 3.0, 3.0, 3.0], [3.2, 3.1, 3.3, 3.0],
[5.0, 5.0, 5.0, 5.0], [5.2, 5.1, 5.3, 5.0]
]
y_train = [
[1.0, 0.0, 0.0], [1.0, 0.0, 0.0],
[0.0, 1.0, 0.0], [0.0, 1.0, 0.0],
[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]
]
model.compile(loss='cross_entropy')
model.train(x_train, y_train, epochs=100, batch_size=2, verbose=False)
# Test predictions
predictions = model.predict(x_train)
correct = 0
for i, pred in enumerate(predictions):
pred_class = pred.index(max(pred))
target_class = y_train[i].index(max(y_train[i]))
if pred_class == target_class:
correct += 1
print(f"Sample {i + 1}: Predicted class {pred_class}, Target class {target_class}")
accuracy = (correct / len(y_train) * 100)
print(f"\nAccuracy: {accuracy:.2f}%")Ruby:
require_relative 'ruby/grnexus'
# Multi-class classification with BatchNorm
model = GRNexus::NeuralNetwork.new(learning_rate: 0.01)
# Layer 1: Dense + BatchNorm + Activation
model.add(GRNEXUSLayer::DenseLayer.new(units: 16, input_dim: 4, activation: nil)) # No activation yet
model.add(GRNEXUSLayer::BatchNormLayer.new(epsilon: 1e-5, momentum: 0.1))
model.add(GRNEXUSLayer::ActivationLayer.new(GRNEXUSActivations::ReLU.new))
# Layer 2: Dense + BatchNorm + Activation
model.add(GRNEXUSLayer::DenseLayer.new(units: 8, input_dim: 16, activation: nil))
model.add(GRNEXUSLayer::BatchNormLayer.new(epsilon: 1e-5, momentum: 0.1))
model.add(GRNEXUSLayer::ActivationLayer.new(GRNEXUSActivations::ReLU.new))
# Output layer
model.add(GRNEXUSLayer::DenseLayer.new(units: 3, input_dim: 8, activation: GRNEXUSNormalization::Softmax.new))
# Multi-class data (3 classes)
x_train = [
[1.0, 1.0, 1.0, 1.0], [1.2, 1.1, 1.3, 1.0],
[3.0, 3.0, 3.0, 3.0], [3.2, 3.1, 3.3, 3.0],
[5.0, 5.0, 5.0, 5.0], [5.2, 5.1, 5.3, 5.0]
]
y_train = [
[1.0, 0.0, 0.0], [1.0, 0.0, 0.0],
[0.0, 1.0, 0.0], [0.0, 1.0, 0.0],
[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]
]
model.compile(loss: 'cross_entropy')
model.train(x_train, y_train, epochs: 100, batch_size: 2, verbose: false)
# Test predictions
predictions = model.predict(x_train)
correct = 0
predictions.each_with_index do |pred, i|
pred_class = pred.index(pred.max)
target_class = y_train[i].index(y_train[i].max)
correct += 1 if pred_class == target_class
puts "Sample #{i + 1}: Predicted class #{pred_class}, Target class #{target_class}"
end
accuracy = (correct.to_f / y_train.length * 100).round(2)
puts "\nAccuracy: #{accuracy}%"Example 4: Complex Architecture with All Layer Types
Python:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, BatchNormLayer, DropoutLayer, ActivationLayer
from lib.grnexus_activations import ReLU, Tanh
from lib.grnexus_normalization import Softmax
import random
# Complex network combining all layer types
model = NeuralNetwork(learning_rate=0.01, name='ComplexModel')
# Input layer with BatchNorm and Dropout
model.add(DenseLayer(units=32, input_dim=5, activation=None))
model.add(BatchNormLayer())
model.add(ActivationLayer(ReLU()))
model.add(DropoutLayer(rate=0.3))
# Hidden layer 1
model.add(DenseLayer(units=16, input_dim=32, activation=None))
model.add(BatchNormLayer())
model.add(ActivationLayer(ReLU()))
model.add(DropoutLayer(rate=0.2))
# Hidden layer 2
model.add(DenseLayer(units=8, input_dim=16, activation=Tanh()))
# Output layer
model.add(DenseLayer(units=2, input_dim=8, activation=Softmax()))
# View architecture
print("\nModel Summary:")
model.summary()
# Generate synthetic data
x_train = []
y_train = []
for i in range(20):
if i < 10:
x_train.append([random.uniform(0.0, 2.0) for _ in range(5)])
y_train.append([1.0, 0.0])
else:
x_train.append([random.uniform(5.0, 7.0) for _ in range(5)])
y_train.append([0.0, 1.0])
# Train
model.compile(loss='cross_entropy')
history = model.train(x_train, y_train, epochs=50, batch_size=4, verbose=False)
print(f"\nFinal training loss: {history['loss'][-1]:.4f}")
if history.get('accuracy'):
print(f"Final training accuracy: {history['accuracy'][-1]:.2f}%")
# Save and load model
filepath = 'complex_model.nexus'
model.save(filepath)
print(f"\nโ Model saved to {filepath}")
loaded_model = NeuralNetwork.load(filepath)
print("โ Model loaded successfully")
# Verify predictions match
test_sample = [x_train[0]]
pred_original = model.predict(test_sample)
pred_loaded = loaded_model.predict(test_sample)
print("\nVerifying loaded model predictions match...")
for i, val in enumerate(pred_original[0]):
error = abs(val - pred_loaded[0][i])
assert error < 0.001, "Predictions don't match!"
print("โ Predictions match!")Ruby:
require_relative 'ruby/grnexus'
# Complex network combining all layer types
model = GRNexus::NeuralNetwork.new(learning_rate: 0.01, name: 'ComplexModel')
# Input layer with BatchNorm and Dropout
model.add(GRNEXUSLayer::DenseLayer.new(units: 32, input_dim: 5, activation: nil))
model.add(GRNEXUSLayer::BatchNormLayer.new)
model.add(GRNEXUSLayer::ActivationLayer.new(GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.3))
# Hidden layer 1
model.add(GRNEXUSLayer::DenseLayer.new(units: 16, input_dim: 32, activation: nil))
model.add(GRNEXUSLayer::BatchNormLayer.new)
model.add(GRNEXUSLayer::ActivationLayer.new(GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.2))
# Hidden layer 2
model.add(GRNEXUSLayer::DenseLayer.new(units: 8, input_dim: 16, activation: GRNEXUSActivations::Tanh.new))
# Output layer
model.add(GRNEXUSLayer::DenseLayer.new(units: 2, input_dim: 8, activation: GRNEXUSNormalization::Softmax.new))
# View architecture
puts "\nModel Summary:"
model.summary
# Generate synthetic data
x_train = []
y_train = []
20.times do |i|
if i < 10
x_train << [rand(0.0..2.0), rand(0.0..2.0), rand(0.0..2.0), rand(0.0..2.0), rand(0.0..2.0)]
y_train << [1.0, 0.0]
else
x_train << [rand(5.0..7.0), rand(5.0..7.0), rand(5.0..7.0), rand(5.0..7.0), rand(5.0..7.0)]
y_train << [0.0, 1.0]
end
end
# Train
model.compile(loss: 'cross_entropy')
history = model.train(x_train, y_train, epochs: 50, batch_size: 4, verbose: false)
puts "\nFinal training loss: #{history[:loss].last.round(4)}"
puts "Final training accuracy: #{history[:accuracy].last.round(2)}%" if history[:accuracy].any?
# Save and load model
filepath = 'complex_model.nexus'
model.save(filepath)
puts "\nโ Model saved to #{filepath}"
loaded_model = GRNexus::NeuralNetwork.load(filepath)
puts "โ Model loaded successfully"
# Verify predictions match
test_sample = [x_train[0]]
pred_original = model.predict(test_sample)
pred_loaded = loaded_model.predict(test_sample)
puts "\nVerifying loaded model predictions match..."
pred_original[0].each_with_index do |val, i|
error = (val - pred_loaded[0][i]).abs
raise "Predictions don't match!" if error > 0.001
end
puts "โ Predictions match!"2. Deep Network with Modern Activations
Python:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, BatchNormLayer, DropoutLayer
from lib.grnexus_activations import GELU, Swish, Mish, SELU
from lib.grnexus_normalization import Softmax
# Create a state-of-the-art deep network
model = NeuralNetwork(
loss='mse',
optimizer='adam',
learning_rate=0.001,
name='deep_network'
)
# Layer 1: GELU activation (used in GPT, BERT)
model.add(DenseLayer(units=128, input_dim=20, activation=GELU()))
model.add(BatchNormLayer())
# Layer 2: Swish activation (Google's discovery)
model.add(DenseLayer(units=96, input_dim=128, activation=Swish()))
model.add(DropoutLayer(rate=0.2))
# Layer 3: Mish activation (state-of-the-art)
model.add(DenseLayer(units=64, input_dim=96, activation=Mish()))
model.add(BatchNormLayer())
# Layer 4: SELU (self-normalizing)
model.add(DenseLayer(units=32, input_dim=64, activation=SELU()))
# Output layer
model.add(DenseLayer(units=5, input_dim=32, activation=Softmax()))
# View architecture
model.summary()
# Train
history = model.train(x_train, y_train, epochs=50, batch_size=32, verbose=True)
# Save
model.save('models/deep_network.nexus')
Ruby - Same Deep Network:
require_relative 'ruby/grnexus'
# Create a state-of-the-art deep network
model = GRNexus::NeuralNetwork.new(
loss: 'mse',
optimizer: 'adam',
learning_rate: 0.001,
name: 'deep_network'
)
# Layer 1: GELU activation (used in GPT, BERT)
model.add(GRNEXUSLayer::DenseLayer.new(
units: 128,
input_dim: 20,
activation: GRNEXUSActivations::GELU.new
))
model.add(GRNEXUSLayer::BatchNormLayer.new)
# Layer 2: Swish activation (Google's discovery)
model.add(GRNEXUSLayer::DenseLayer.new(
units: 96,
input_dim: 128,
activation: GRNEXUSActivations::Swish.new
))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.2))
# Layer 3: Mish activation (state-of-the-art)
model.add(GRNEXUSLayer::DenseLayer.new(
units: 64,
input_dim: 96,
activation: GRNEXUSActivations::Mish.new
))
model.add(GRNEXUSLayer::BatchNormLayer.new)
# Layer 4: SELU (self-normalizing)
model.add(GRNEXUSLayer::DenseLayer.new(
units: 32,
input_dim: 64,
activation: GRNEXUSActivations::SELU.new
))
# Output layer
model.add(GRNEXUSLayer::DenseLayer.new(
units: 5,
input_dim: 32,
activation: GRNEXUSActivations::Linear.new
))
# View architecture
model.summary
# ================================================================================
# Model: deep_network
# ================================================================================
# Output Shape Param #
# --------------------------------------------------------------------------------
# DenseLayer (GELU) (1) (None, 128) 2688
# BatchNormLayer (2) (None, 128) 2
# DenseLayer (Swish) (3) (None, 96) 12384
# DropoutLayer (4) (None, 96) 0
# DenseLayer (Mish) (5) (None, 64) 6208
# BatchNormLayer (6) (None, 64) 2
# DenseLayer (SELU) (7) (None, 32) 2080
# DenseLayer (Linear) (8) (None, 5) 165
# ================================================================================
# Total params: 23,529
# Trainable params: 23,529
# Non-trainable params: 0
# ================================================================================
# Train
history = model.train(x_train, y_train, epochs: 50, batch_size: 32, verbose: true)
# Save
model.save('models/deep_network.nexus')
3. Time Series Prediction
Python - Time Series Forecasting:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer
from lib.grnexus_activations import Tanh, ReLU
from lib.grnexus_numeric_proccessing import MovingAverage, ZScoreNormalize
import math
import random
# Generate time series
time_series = [math.sin(i * 0.05) * 10 + random.random() * 2 for i in range(200)]
# Preprocess
ma = MovingAverage(window_size=5)
smoothed = ma.process(time_series)
zscore = ZScoreNormalize()
normalized = zscore.process(smoothed)
# Create sliding windows
window_size = 10
x_train = []
y_train = []
for i in range(len(normalized) - window_size - 1):
x_train.append(normalized[i:i+window_size])
y_train.append([normalized[i + window_size]])
# Build model
ts_model = NeuralNetwork(loss='mse', learning_rate=0.01)
ts_model.add(DenseLayer(64, window_size, activation=Tanh()))
ts_model.add(DenseLayer(32, 64, activation=ReLU()))
ts_model.add(DenseLayer(1, 32))
ts_model.train(x_train, y_train, epochs=50, batch_size=16)
ts_model.save('time_series_model.nexus')
# Make predictions
future_window = normalized[-window_size:]
prediction = ts_model.predict([future_window])[0]
print(f"Next value prediction: {prediction[0]:.4f}")Ruby - Same Time Series Forecasting:
require_relative 'ruby/grnexus'
# Generate time series
time_series = (0..199).map { |i| Math.sin(i * 0.05) * 10 + rand * 2 }
# Preprocess
ma = GRNEXUSNumericProcessing::MovingAverage.new(window_size: 5)
smoothed = ma.process(time_series)
zscore = GRNEXUSNumericProcessing::ZScoreNormalize.new
normalized = zscore.process(smoothed)
# Create sliding windows
window_size = 10
x_train = []
y_train = []
(0...(normalized.length - window_size - 1)).each do |i|
x_train << normalized[i, window_size]
y_train << [normalized[i + window_size]]
end
# Build model
ts_model = GRNexus::NeuralNetwork.new(loss: 'mse', learning_rate: 0.01)
ts_model.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: window_size, activation: GRNEXUSActivations::Tanh.new))
ts_model.add(GRNEXUSLayer::DenseLayer.new(units: 32, input_dim: 64, activation: GRNEXUSActivations::ReLU.new))
ts_model.add(GRNEXUSLayer::DenseLayer.new(units: 1, input_dim: 32))
ts_model.train(x_train, y_train, epochs: 50, batch_size: 16)
ts_model.save('time_series_model.nexus')
# Make predictions
future_window = normalized[-window_size..-1]
prediction = ts_model.predict([future_window])[0]
puts "Next value prediction: #{prediction[0].round(4)}"3. Convolutional Neural Network (CNN) - Complete Examples
Python - Image Classification with CNN:
from grnexus import NeuralNetwork
from lib.grnexus_layers import *
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Build CNN for MNIST-like image classification (28x28 grayscale)
model = NeuralNetwork(
loss='cross_entropy',
optimizer='adam',
learning_rate=0.001,
name='mnist_classifier'
)
# First convolutional block
model.add(Conv2DLayer(
filters=32,
kernel_size=3,
input_shape=(28, 28, 1), # 28x28 grayscale images
activation=ReLU(),
padding='same'
))
model.add(MaxPoolingLayer(pool_size=2, stride=2)) # Output: 14x14x32
# Second convolutional block
model.add(Conv2DLayer(
filters=64,
kernel_size=3,
activation=ReLU(),
padding='same'
))
model.add(MaxPoolingLayer(pool_size=2, stride=2)) # Output: 7x7x64
# Third convolutional block (optional, for deeper networks)
model.add(Conv2DLayer(
filters=128,
kernel_size=3,
activation=ReLU(),
padding='same'
))
# Flatten and dense layers
model.add(FlattenLayer()) # Flatten to 1D: 7x7x128 = 6272
model.add(DenseLayer(
units=256,
input_dim=6272,
activation=ReLU()
))
model.add(DropoutLayer(rate=0.5)) # Regularization
model.add(DenseLayer(
units=10,
input_dim=256,
activation=Softmax() # 10 classes (digits 0-9)
))
# View architecture
model.summary()
# Prepare data (example with random data)
import random
x_train = [[[random.random() for _ in range(28)] for _ in range(28)] for _ in range(1000)]
y_train = []
for _ in range(1000):
label = random.randint(0, 9)
y_train.append([1.0 if i == label else 0.0 for i in range(10)])
# Train
history = model.train(
x_train, y_train,
epochs=20,
batch_size=32,
verbose=True
)
# Save model
model.save('models/mnist_cnn.nexus')
# Evaluate
x_test = [[[random.random() for _ in range(28)] for _ in range(28)] for _ in range(200)]
y_test = []
for _ in range(200):
label = random.randint(0, 9)
y_test.append([1.0 if i == label else 0.0 for i in range(10)])
loss, accuracy = model.evaluate(x_test, y_test)
print(f"Test Accuracy: {accuracy:.2f}%")
# Predict single image
single_image = [[random.random() for _ in range(28)] for _ in range(28)]
prediction = model.predict([single_image])[0]
predicted_digit = prediction.index(max(prediction))
print(f"Predicted digit: {predicted_digit} (confidence: {max(prediction)*100:.2f}%)")Ruby - Same CNN Architecture:
require_relative 'ruby/grnexus'
# Build CNN for MNIST-like image classification (28x28 grayscale)
model = GRNexus::NeuralNetwork.new(
loss: 'cross_entropy',
optimizer: 'adam',
learning_rate: 0.001,
name: 'mnist_classifier'
)
# First convolutional block
model.add(GRNEXUSLayer::Conv2DLayer.new(
filters: 32,
kernel_size: 3,
input_shape: [28, 28, 1], # 28x28 grayscale images
activation: GRNEXUSActivations::ReLU.new,
padding: 'same'
))
model.add(GRNEXUSLayer::MaxPoolingLayer.new(pool_size: 2, stride: 2)) # Output: 14x14x32
# Second convolutional block
model.add(GRNEXUSLayer::Conv2DLayer.new(
filters: 64,
kernel_size: 3,
activation: GRNEXUSActivations::ReLU.new,
padding: 'same'
))
model.add(GRNEXUSLayer::MaxPoolingLayer.new(pool_size: 2, stride: 2)) # Output: 7x7x64
# Third convolutional block (optional, for deeper networks)
model.add(GRNEXUSLayer::Conv2DLayer.new(
filters: 128,
kernel_size: 3,
activation: GRNEXUSActivations::ReLU.new,
padding: 'same'
))
# Flatten and dense layers
model.add(GRNEXUSLayer::FlattenLayer.new) # Flatten to 1D: 7x7x128 = 6272
model.add(GRNEXUSLayer::DenseLayer.new(
units: 256,
input_dim: 6272,
activation: GRNEXUSActivations::ReLU.new
))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.5)) # Regularization
model.add(GRNEXUSLayer::DenseLayer.new(
units: 10,
input_dim: 256,
activation: GRNEXUSNormalization::Softmax.new # 10 classes (digits 0-9)
))
# View architecture
model.summary
# Prepare data (example with random data)
x_train = Array.new(1000) { Array.new(28) { Array.new(28) { rand } } }
y_train = Array.new(1000) do
label = rand(10)
Array.new(10) { |i| i == label ? 1.0 : 0.0 }
end
# Train
history = model.train(
x_train, y_train,
epochs: 20,
batch_size: 32,
verbose: true
)
# Save model (compatible with Python!)
model.save('models/mnist_cnn.nexus')
# Evaluate
x_test = Array.new(200) { Array.new(28) { Array.new(28) { rand } } }
y_test = Array.new(200) do
label = rand(10)
Array.new(10) { |i| i == label ? 1.0 : 0.0 }
end
loss, accuracy = model.evaluate(x_test, y_test)
puts "Test Accuracy: #{accuracy.round(2)}%"
# Predict single image
single_image = Array.new(28) { Array.new(28) { rand } }
prediction = model.predict([single_image])[0]
predicted_digit = prediction.index(prediction.max)
confidence = prediction.max * 100
puts "Predicted digit: #{predicted_digit} (confidence: #{confidence.round(2)}%)"RGB Image Classification (Color Images):
# For RGB images (e.g., 32x32x3 CIFAR-10 style)
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
# Input: 32x32x3 (RGB)
model.add(Conv2DLayer(filters=32, kernel_size=3, input_shape=(32, 32, 3), activation=ReLU()))
model.add(Conv2DLayer(filters=32, kernel_size=3, activation=ReLU()))
model.add(MaxPoolingLayer(pool_size=2))
model.add(DropoutLayer(rate=0.25))
model.add(Conv2DLayer(filters=64, kernel_size=3, activation=ReLU()))
model.add(Conv2DLayer(filters=64, kernel_size=3, activation=ReLU()))
model.add(MaxPoolingLayer(pool_size=2))
model.add(DropoutLayer(rate=0.25))
model.add(FlattenLayer())
model.add(DenseLayer(512, activation=ReLU()))
model.add(DropoutLayer(rate=0.5))
model.add(DenseLayer(10, activation=Softmax()))
# Train on RGB images
model.train(rgb_images, labels, epochs=50, batch_size=64)
Ruby - RGB Image Classification:
# For RGB images (e.g., 32x32x3 CIFAR-10 style)
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
# Input: 32x32x3 (RGB)
model.add(GRNEXUSLayer::Conv2DLayer.new(filters: 32, kernel_size: 3, input_shape: [32, 32, 3], activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::Conv2DLayer.new(filters: 32, kernel_size: 3, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::MaxPoolingLayer.new(pool_size: 2))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.25))
model.add(GRNEXUSLayer::Conv2DLayer.new(filters: 64, kernel_size: 3, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::Conv2DLayer.new(filters: 64, kernel_size: 3, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::MaxPoolingLayer.new(pool_size: 2))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.25))
model.add(GRNEXUSLayer::FlattenLayer.new)
model.add(GRNEXUSLayer::DenseLayer.new(units: 512, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.5))
model.add(GRNEXUSLayer::DenseLayer.new(units: 10, activation: GRNEXUSNormalization::Softmax.new))
# Train on RGB images
model.train(rgb_images, labels, epochs: 50, batch_size: 64)
MaxPoolingLayer - Detailed Usage:
MaxPoolingLayer reduces spatial dimensions by taking the maximum value in each pooling window.
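To make the operation concrete, here is a tiny framework-free illustration of 2x2 max pooling with stride 2 on a 4x4 matrix (plain Python, no GRNexus APIs involved):
# 2x2 max pooling with stride 2, written out by hand
image = [
    [1, 3, 2, 4],
    [5, 6, 1, 2],
    [7, 2, 9, 0],
    [1, 4, 3, 8],
]
pool, stride = 2, 2
pooled = []
for r in range(0, len(image) - pool + 1, stride):
    row = []
    for c in range(0, len(image[0]) - pool + 1, stride):
        window = [image[r + i][c + j] for i in range(pool) for j in range(pool)]
        row.append(max(window))  # keep only the largest value in each window
    pooled.append(row)
print(pooled)  # => [[6, 4], [7, 9]]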
Python - MaxPooling Examples:
from lib.grnexus_layers import MaxPoolingLayer
import random
# Create pooling layer
pool_layer = MaxPoolingLayer(pool_size=2, stride=2)
# Example 1: Single 2D image (14x14)
single_image = [[random.random() * 10 for _ in range(14)] for _ in range(14)]
pooled_single = pool_layer.forward(single_image)
# Output shape: (7, 7) - reduced by factor of 2
# Example 2: Batch of 2D images (5 images of 14x14)
batch_images = [[[random.random() * 10 for _ in range(14)] for _ in range(14)] for _ in range(5)]
pooled_batch = pool_layer.forward(batch_images)
# Output shape: (5, 7, 7) - batch preserved, spatial dims reduced
# Example 3: After Conv2D layer
# Conv2D output is typically (batch, height, width, channels)
# You can pool each channel separately or reshape as needed
Ruby - MaxPooling Examples:
require_relative 'ruby/grnexus'
# Create pooling layer
pool_layer = GRNEXUSLayer::MaxPoolingLayer.new(pool_size: 2, stride: 2)
# Example 1: Single 2D image (14x14)
single_image = Array.new(14) { Array.new(14) { rand * 10 } }
pooled_single = pool_layer.forward(single_image)
# Output shape: (7, 7) - reduced by factor of 2
# Example 2: Batch of 2D images (5 images of 14x14)
batch_images = Array.new(5) { Array.new(14) { Array.new(14) { rand * 10 } } }
pooled_batch = pool_layer.forward(batch_images)
# Output shape: (5, 7, 7) - batch preserved, spatial dims reduced
# Example 3: Custom pool size and stride
pool_layer_custom = GRNEXUSLayer::MaxPoolingLayer.new(
pool_size: [3, 3], # 3x3 pooling window
stride: [2, 2] # Stride of 2 in both directions
)
Key Points:
- Input must be 2D arrays (images) or 3D arrays (batch of images)
- `pool_size` can be an integer (square window) or `[height, width]`
- `stride` defaults to `pool_size` if not specified
- Output dimensions: `(input_size - pool_size) / stride + 1` (see the short check after this list)
- Commonly used after Conv2D layers to reduce spatial dimensions
- Helps with translation invariance and reduces computation
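A quick sanity check of the output-size formula, in plain Python (nothing framework-specific):
# Output size per spatial dimension: (input_size - pool_size) // stride + 1
def pooled_size(input_size, pool_size, stride):
    return (input_size - pool_size) // stride + 1

print(pooled_size(14, 2, 2))  # => 7, matching the 14x14 -> 7x7 examples above
print(pooled_size(28, 2, 2))  # => 14, the first MaxPooling step in the CNN example
print(pooled_size(14, 3, 2))  # => 6, a 3x3 window with stride 2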
5. Recurrent Neural Networks (RNN/LSTM/GRU)
Python - LSTM for Sequence Classification:
from grnexus import NeuralNetwork
from lib.grnexus_layers import LSTMLayer, FlattenLayer, DenseLayer
from lib.grnexus_normalization import Softmax
# Build LSTM network for sequence classification
model = NeuralNetwork(learning_rate=0.01)
# LSTM layer processes sequences
# Input shape: (batch_size, sequence_length, features)
# For example: [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0], ...]
model.add(LSTMLayer(units=8, input_size=3))
# Flatten LSTM output for dense layer
model.add(FlattenLayer())
# Output layer for classification
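# Note: input_dim=40 below follows from this example's shapes: the LSTM emits 8 units
# per timestep and each training sequence has 5 timesteps, so flattening gives 8 * 5 = 40.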
model.add(DenseLayer(units=2, input_dim=40, activation=Softmax()))
# Sequence data: [batch_size, sequence_length, features]
# Sequence 1: increasing pattern [1->5] -> class 0
# Sequence 2: decreasing pattern [5->1] -> class 1
x_train = [
[[1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0], [4.0, 4.0, 4.0], [5.0, 5.0, 5.0]],
[[5.0, 5.0, 5.0], [4.0, 4.0, 4.0], [3.0, 3.0, 3.0], [2.0, 2.0, 2.0], [1.0, 1.0, 1.0]]
]
y_train = [[1.0, 0.0], [0.0, 1.0]]
print("Training LSTM on sequence patterns...")
print(" Sequence 1: Increasing pattern [1->5]")
print(" Sequence 2: Decreasing pattern [5->1]")
model.compile(loss='cross_entropy')
model.train(x_train, y_train, epochs=30, batch_size=2, verbose=False)
# Predict
predictions = model.predict(x_train)
for i, pred in enumerate(predictions):
pred_class = pred.index(max(pred))
target_class = y_train[i].index(max(y_train[i]))
pattern = "Increasing" if i == 0 else "Decreasing"
print(f" {pattern} sequence: Predicted class {pred_class}, Target class {target_class}")
model.save('lstm_model.nexus')
Ruby - LSTM for Sequence Classification:
require_relative 'ruby/grnexus'
# Build LSTM network for sequence classification
model = GRNexus::NeuralNetwork.new(learning_rate: 0.01)
# LSTM layer processes sequences
# Input shape: (batch_size, sequence_length, features)
model.add(GRNEXUSLayer::LSTMLayer.new(units: 8, input_size: 3))
# Flatten LSTM output for dense layer
model.add(GRNEXUSLayer::FlattenLayer.new)
# Output layer for classification
model.add(GRNEXUSLayer::DenseLayer.new(units: 2, input_dim: 40, activation: GRNEXUSNormalization::Softmax.new))
# Sequence data: [batch_size, sequence_length, features]
# Sequence 1: increasing pattern [1->5] -> class 0
# Sequence 2: decreasing pattern [5->1] -> class 1
x_train = [
[[1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0], [4.0, 4.0, 4.0], [5.0, 5.0, 5.0]],
[[5.0, 5.0, 5.0], [4.0, 4.0, 4.0], [3.0, 3.0, 3.0], [2.0, 2.0, 2.0], [1.0, 1.0, 1.0]]
]
y_train = [[1.0, 0.0], [0.0, 1.0]]
puts "Training LSTM on sequence patterns..."
puts " Sequence 1: Increasing pattern [1->5]"
puts " Sequence 2: Decreasing pattern [5->1]"
model.compile(loss: 'cross_entropy')
model.train(x_train, y_train, epochs: 30, batch_size: 2, verbose: false)
# Predict
predictions = model.predict(x_train)
predictions.each_with_index do |pred, i|
pred_class = pred.index(pred.max)
target_class = y_train[i].index(y_train[i].max)
pattern = i == 0 ? "Increasing" : "Decreasing"
puts " #{pattern} sequence: Predicted class #{pred_class}, Target class #{target_class}"
end
model.save('lstm_model.nexus')
GRU Alternative (Faster than LSTM):
Python - GRU for Sequence Classification:
from grnexus import NeuralNetwork
from lib.grnexus_layers import GRULayer, FlattenLayer, DenseLayer
from lib.grnexus_normalization import Softmax
# GRU is faster and often performs similarly to LSTM
model = NeuralNetwork(learning_rate=0.01)
# GRU layer processes sequences
model.add(GRULayer(units=6, input_size=2))
# Flatten GRU output
model.add(FlattenLayer())
# Output layer
model.add(DenseLayer(units=2, input_dim=18, activation=Softmax()))
# Sequence data
x_train = [
[[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]],
[[1.0, 1.0], [0.5, 0.5], [0.0, 0.0]]
]
y_train = [[1.0, 0.0], [0.0, 1.0]]
print("Training GRU on sequence patterns...")
model.compile(loss='cross_entropy')
model.train(x_train, y_train, epochs=30, batch_size=2, verbose=False)
predictions = model.predict(x_train)
for i, pred in enumerate(predictions):
pred_class = pred.index(max(pred))
print(f" Sequence {i + 1}: Predicted class {pred_class}")Ruby - GRU for Sequence Classification:
require_relative 'ruby/grnexus'
# GRU is faster and often performs similarly to LSTM
model = GRNexus::NeuralNetwork.new(learning_rate: 0.01)
# GRU layer processes sequences
model.add(GRNEXUSLayer::GRULayer.new(units: 6, input_size: 2))
# Flatten GRU output
model.add(GRNEXUSLayer::FlattenLayer.new)
# Output layer
model.add(GRNEXUSLayer::DenseLayer.new(units: 2, input_dim: 18, activation: GRNEXUSNormalization::Softmax.new))
# Sequence data
x_train = [
[[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]],
[[1.0, 1.0], [0.5, 0.5], [0.0, 0.0]]
]
y_train = [[1.0, 0.0], [0.0, 1.0]]
puts "Training GRU on sequence patterns..."
model.compile(loss: 'cross_entropy')
model.train(x_train, y_train, epochs: 30, batch_size: 2, verbose: false)
predictions = model.predict(x_train)
predictions.each_with_index do |pred, i|
pred_class = pred.index(pred.max)
puts " Sequence #{i + 1}: Predicted class #{pred_class}"
end
6. Embedding Layer for Text Processing
Python - Word Embeddings for Text Classification:
from grnexus import NeuralNetwork
from lib.grnexus_layers import EmbeddingLayer, FlattenLayer, DenseLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Create network with embedding layer
model = NeuralNetwork(learning_rate=0.01)
# Embedding layer: converts word indices to dense vectors
# vocab_size=10 means we have 10 unique words
# embedding_dim=4 means each word is represented by a 4D vector
model.add(EmbeddingLayer(vocab_size=10, embedding_dim=4))
# Flatten embeddings (input_length * embedding_dim)
model.add(FlattenLayer())
# Dense layers for classification
model.add(DenseLayer(units=8, input_dim=12, activation=ReLU()))
model.add(DenseLayer(units=2, input_dim=8, activation=Softmax()))
# Word indices as input (simulating tokenized text)
# Sequence 1: [1, 2, 3] -> class 0
# Sequence 2: [7, 8, 9] -> class 1
x_train = [
[1.0, 2.0, 3.0],
[7.0, 8.0, 9.0]
]
y_train = [[1.0, 0.0], [0.0, 1.0]]
print("Training with word embeddings...")
print(" Vocab size: 10, Embedding dim: 4")
model.compile(loss='cross_entropy')
model.train(x_train, y_train, epochs=30, batch_size=2, verbose=False)
predictions = model.predict(x_train)
for i, pred in enumerate(predictions):
pred_class = pred.index(max(pred))
print(f" Word sequence {i + 1}: Predicted class {pred_class}")Ruby - Word Embeddings for Text Classification:
require_relative 'ruby/grnexus'
# Create network with embedding layer
model = GRNexus::NeuralNetwork.new(learning_rate: 0.01)
# Embedding layer: converts word indices to dense vectors
# vocab_size=10 means we have 10 unique words
# embedding_dim=4 means each word is represented by a 4D vector
model.add(GRNEXUSLayer::EmbeddingLayer.new(vocab_size: 10, embedding_dim: 4))
# Flatten embeddings (input_length * embedding_dim)
model.add(GRNEXUSLayer::FlattenLayer.new)
# Dense layers for classification
model.add(GRNEXUSLayer::DenseLayer.new(units: 8, input_dim: 12, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DenseLayer.new(units: 2, input_dim: 8, activation: GRNEXUSNormalization::Softmax.new))
# Word indices as input (simulating tokenized text)
# Sequence 1: [1, 2, 3] -> class 0
# Sequence 2: [7, 8, 9] -> class 1
x_train = [
[1.0, 2.0, 3.0],
[7.0, 8.0, 9.0]
]
y_train = [[1.0, 0.0], [0.0, 1.0]]
puts "Training with word embeddings..."
puts " Vocab size: 10, Embedding dim: 4"
model.compile(loss: 'cross_entropy')
model.train(x_train, y_train, epochs: 30, batch_size: 2, verbose: false)
predictions = model.predict(x_train)
predictions.each_with_index do |pred, i|
pred_class = pred.index(pred.max)
puts " Word sequence #{i + 1}: Predicted class #{pred_class}"
end
7. Smart Training with Callbacks
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, BatchNormLayer, DropoutLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
from lib.grnexus_callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
# Build model
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.1, name='smart_model')
model.add(DenseLayer(64, 15, activation=ReLU()))
model.add(BatchNormLayer())
model.add(DropoutLayer(rate=0.3))
model.add(DenseLayer(32, 64, activation=ReLU()))
model.add(DenseLayer(4, 32, activation=Softmax()))
# Configure intelligent callbacks
callbacks = [
# Stop training if validation loss doesn't improve for 5 epochs
EarlyStopping(
monitor='val_loss',
patience=5,
verbose=True,
restore_best_weights=True
),
# Reduce learning rate when validation loss plateaus
ReduceLROnPlateau(
monitor='val_loss',
factor=0.5,
patience=3,
min_lr=0.0001,
verbose=True
),
# Save best model automatically
ModelCheckpoint(
filepath='models/best_model.nexus',
monitor='val_loss',
save_best_only=True,
verbose=True
)
]
# Train with intelligence
history = model.train(
x_train, y_train,
epochs=100,
batch_size=32,
validation_data=(x_val, y_val),
callbacks=callbacks,
verbose=True
)
# Output:
# Epoch 1/100 - Loss: 1.3862 - Accuracy: 25.00% - Val Loss: 1.3521 - Val Accuracy: 30.00%
# Epoch 1: val_loss improved to 1.3521, saving model to models/best_model.nexus
# ...
# Epoch 8: Reducing learning rate from 0.1 to 0.05
# ...
# Epoch 15: Reducing learning rate from 0.05 to 0.025
# ...
# Early stopping triggered at epoch 22
# Restoring best weights from epoch 17
print(f"Best validation loss: {min(history['val_loss'])}")
print(f"Training stopped at epoch: {len(history['loss'])}")6. Classical Machine Learning Algorithms
GRNexus includes 5 classical ML algorithms with the same cross-language compatibility as neural networks. These algorithms are perfect for traditional machine learning tasks and often outperform neural networks on smaller datasets.
When to Use Each Algorithm
| Algorithm | Type | Best For | Dataset Size | Speed | Interpretability |
|---|---|---|---|---|---|
| KNN | Classification | Pattern recognition, recommendation | Small-Medium | Fast | High |
| K-Means | Clustering | Customer segmentation, grouping | Medium-Large | Very Fast | High |
| Linear Regression | Regression | Price prediction, trend analysis | Any | Very Fast | Very High |
| Logistic Regression | Classification | Binary decisions, probability | Medium-Large | Fast | High |
| Naive Bayes | Classification | Text classification, spam detection | Any | Very Fast | High |
Key Features:
- ✓ Native C implementation (10-50x faster than pure Python/Ruby)
- ✓ Save/Load with the `.lnexus` format (different from the neural networks' `.nexus`)
- ✓ Ruby ↔ Python compatibility (see the sketch after this list)
- ✓ Model inspection with `inspect()` / `__repr__()`
- ✓ Production-ready
- ✓ No training required for KNN (lazy learning)
- ✓ Probabilistic predictions available
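All five estimators share the same fit / predict / save workflow; here is a minimal sketch (the file name is illustrative, and `KNeighborsClassifier.load` is assumed to mirror the `LinearRegression.load` / `LogisticRegression.load` calls shown later in this section):
from grnexus import KNeighborsClassifier

# Tiny two-class dataset
X = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [4.9, 5.2]]
y = [0, 0, 1, 1]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y)
print(clf.predict([[0.05, 0.1], [5.0, 5.0]]))  # expected classes: 0 and 1

# Classical models use the .lnexus format (neural networks use .nexus)
clf.save('knn_demo.lnexus')

# Assumed by analogy with LinearRegression.load below; cross-language, the Ruby side
# would be GRNEXUSMachineLearning::KNeighborsClassifier.load('knn_demo.lnexus')
clf_again = KNeighborsClassifier.load('knn_demo.lnexus')
print(clf_again.predict([[0.05, 0.1]]))        # expected class: 0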
8.1 K-Nearest Neighbors (KNN) - Pattern Recognition
What it does: Classifies new data points based on the majority vote of their K nearest neighbors.
Real-World Use Case: Movie Recommendation System
Python Example:
from grnexus import KNeighborsClassifier
# Movie ratings dataset: [action, comedy, drama, romance, sci-fi]
# Users who like action/sci-fi movies
user_ratings = [
[5, 2, 1, 1, 4], # User 1: Loves action & sci-fi
[5, 1, 2, 1, 5], # User 2: Loves action & sci-fi
[4, 2, 1, 2, 5], # User 3: Loves action & sci-fi
[1, 5, 4, 5, 1], # User 4: Loves comedy & romance
[2, 5, 5, 4, 1], # User 5: Loves comedy & romance
[1, 4, 5, 5, 2], # User 6: Loves comedy & romance
]
# User profiles: 0 = Action/Sci-Fi fan, 1 = Comedy/Romance fan
user_profiles = [0, 0, 0, 1, 1, 1]
# Train KNN recommender
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(user_ratings, user_profiles)
# New user rates movies
new_user = [[4, 3, 2, 2, 4]] # Likes action & sci-fi more
profile = knn.predict(new_user)[0]
print(f"New user ratings: {new_user[0]}")
print(f"Recommended profile: {'Action/Sci-Fi' if profile == 0 else 'Comedy/Romance'}")
print(f"โ Recommend: {'Action movies like Avengers, Star Wars' if profile == 0 else 'Comedies like The Hangover, Bridesmaids'}")
# Save for production
knn.save('movie_recommender.lnexus')
# Model info
print(f"\nModel: {knn}")
# Output: <KNeighborsClassifier n_neighbors=3, status=trained, samples=6, features=5>Ruby Example:
require_relative 'ruby/grnexus'
# Movie ratings dataset: [action, comedy, drama, romance, sci-fi]
user_ratings = [
[5, 2, 1, 1, 4], # User 1: Loves action & sci-fi
[5, 1, 2, 1, 5], # User 2: Loves action & sci-fi
[4, 2, 1, 2, 5], # User 3: Loves action & sci-fi
[1, 5, 4, 5, 1], # User 4: Loves comedy & romance
[2, 5, 5, 4, 1], # User 5: Loves comedy & romance
[1, 4, 5, 5, 2], # User 6: Loves comedy & romance
]
# User profiles: 0 = Action/Sci-Fi fan, 1 = Comedy/Romance fan
user_profiles = [0, 0, 0, 1, 1, 1]
# Train KNN recommender
knn = GRNEXUSMachineLearning::KNeighborsClassifier.new(n_neighbors: 3)
knn.fit(user_ratings, user_profiles)
# New user rates movies
new_user = [[4, 3, 2, 2, 4]] # Likes action & sci-fi more
profile = knn.predict(new_user)[0]
puts "New user ratings: #{new_user[0]}"
puts "Recommended profile: #{profile == 0 ? 'Action/Sci-Fi' : 'Comedy/Romance'}"
puts "โ Recommend: #{profile == 0 ? 'Action movies like Avengers, Star Wars' : 'Comedies like The Hangover, Bridesmaids'}"
# Save for production
knn.save('movie_recommender.lnexus')
# Model info
puts "\nModel: #{knn.inspect}"
# Output: <KNeighborsClassifier n_neighbors=3, status=trained, samples=6, features=5>Why KNN?
- โ No training phase (instant "training")
- โ Naturally handles multi-class problems
- โ Works well with small datasets
- โ Easy to understand and explain
- โ Slow prediction on large datasets
- โ Sensitive to feature scaling
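Because of that sensitivity to feature scaling, it usually helps to bring all features onto a comparable range before calling `fit`; a minimal min-max scaling sketch in plain Python (the data here is made up for illustration):
# Features on very different scales: [minutes_watched_per_month, star_rating]
raw = [[1200, 4.5], [90, 3.0], [1500, 5.0], [60, 2.0]]

def min_max_scale(rows):
    # Rescale every column to [0, 1] using its own min and max
    cols = list(zip(*rows))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    return [[(v - lo) / (hi - lo) if hi > lo else 0.0
             for v, lo, hi in zip(row, mins, maxs)] for row in rows]

scaled = min_max_scale(raw)
print(scaled[0])  # => [0.7916..., 0.8333...]: both features now live in [0, 1]
# Fit the KNN model on `scaled` instead of `raw` so no single feature dominates the distances.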
8.2 K-Means Clustering - Customer Segmentation
What it does: Groups similar data points together without labels (unsupervised learning).
Real-World Use Case: E-commerce Customer Segmentation
Python Example:
from grnexus import KMeans
# Customer data: [monthly_spending, purchase_frequency]
customers = [
# Low spenders, low frequency
[50, 2], [60, 3], [55, 2], [45, 1], [52, 2], [48, 1],
# Medium spenders, high frequency
[200, 8], [220, 9], [210, 8], [195, 7], [215, 9], [205, 8],
# High spenders, very high frequency (VIP)
[500, 15], [520, 16], [510, 14], [495, 14], [530, 17], [505, 15]
]
# Segment customers into 3 groups
kmeans = KMeans(n_clusters=3, max_iter=100)
kmeans.fit(customers)
# Get cluster centers (average customer in each segment)
centers = kmeans.centroids
print("Customer Segments Identified:")
for i, center in enumerate(centers):
spending, frequency = center
if spending < 100:
segment = "Occasional Shoppers"
strategy = "Send discount coupons to increase engagement"
elif spending < 300:
segment = "Regular Customers"
strategy = "Loyalty program and exclusive offers"
else:
segment = "VIP Customers"
strategy = "Personal attention and early access to products"
print(f"\n Segment {i + 1}: {segment}")
print(f" Avg Monthly Spending: ${spending:.2f}")
print(f" Avg Purchase Frequency: {frequency:.1f} times/month")
print(f" Marketing Strategy: {strategy}")
# Classify new customers
new_customers = [
[180, 7], # Should be Regular
[550, 18], # Should be VIP
[40, 1] # Should be Occasional
]
segments = kmeans.predict(new_customers)
print("\nNew Customer Classification:")
for i, (customer, segment) in enumerate(zip(new_customers, segments)):
print(f" Customer {i + 1}: Spending=${customer[0]}, Frequency={customer[1]} โ Segment {segment + 1}")
# Save for production
kmeans.save('customer_segmentation.lnexus')
print(f"\nโ Model: {kmeans}")
# Output: <KMeans n_clusters=3, status=trained, features=2>Ruby Example:
require_relative 'ruby/grnexus'
# Customer data: [monthly_spending, purchase_frequency]
customers = [
# Low spenders, low frequency
[50, 2], [60, 3], [55, 2], [45, 1], [52, 2], [48, 1],
# Medium spenders, high frequency
[200, 8], [220, 9], [210, 8], [195, 7], [215, 9], [205, 8],
# High spenders, very high frequency (VIP)
[500, 15], [520, 16], [510, 14], [495, 14], [530, 17], [505, 15]
]
# Segment customers into 3 groups
kmeans = GRNEXUSMachineLearning::KMeans.new(n_clusters: 3, max_iter: 100)
kmeans.fit(customers)
# Get cluster centers (average customer in each segment)
centers = kmeans.centroids
puts "Customer Segments Identified:"
centers.each_with_index do |center, i|
spending, frequency = center
if spending < 100
segment = "Occasional Shoppers"
strategy = "Send discount coupons to increase engagement"
elsif spending < 300
segment = "Regular Customers"
strategy = "Loyalty program and exclusive offers"
else
segment = "VIP Customers"
strategy = "Personal attention and early access to products"
end
puts "\n Segment #{i + 1}: #{segment}"
puts " Avg Monthly Spending: $#{spending.round(2)}"
puts " Avg Purchase Frequency: #{frequency.round(1)} times/month"
puts " Marketing Strategy: #{strategy}"
end
# Classify new customers
new_customers = [
[180, 7], # Should be Regular
[550, 18], # Should be VIP
[40, 1] # Should be Occasional
]
segments = kmeans.predict(new_customers)
puts "\nNew Customer Classification:"
new_customers.each_with_index do |(spending, freq), i|
puts " Customer #{i + 1}: Spending=$#{spending}, Frequency=#{freq} โ Segment #{segments[i] + 1}"
end
# Save for production
kmeans.save('customer_segmentation.lnexus')
puts "\nโ Model: #{kmeans.inspect}"
# Output: <KMeans n_clusters=3, status=trained, features=2>
Why K-Means?
Pros:
- Fast and scalable
- Works well with large datasets
- Easy to interpret results
- No labels needed (unsupervised)
Cons:
- Need to specify K (number of clusters)
- Sensitive to initial centroid placement
6.3 Linear Regression
Python Example:
from grnexus import LinearRegression
# Training data (house prices example)
x_train = [
[1200, 3], # [square_feet, bedrooms]
[1500, 3],
[1800, 4],
[2000, 4],
[2200, 5]
]
y_train = [200000, 250000, 300000, 350000, 400000] # prices
# Create and train
lr = LinearRegression()
lr.fit(x_train, y_train)
# Predict
new_houses = [[1600, 3], [2100, 4]]
predictions = lr.predict(new_houses)
print(f"Predicted prices: {predictions}")
# Get model coefficients
print(f"Coefficients: {lr.coef_}")
print(f"Intercept: {lr.intercept_}")
# Calculate Rยฒ score
r2 = lr.score(x_train, y_train)
print(f"Rยฒ score: {r2:.4f}")
# Save and load
lr.save('linear_regression.lnexus')
lr_loaded = LinearRegression.load('linear_regression.lnexus')
Ruby Example:
require_relative 'ruby/grnexus'
# Training data (house prices example)
x_train = [
[1200, 3], # [square_feet, bedrooms]
[1500, 3],
[1800, 4],
[2000, 4],
[2200, 5]
]
y_train = [200000, 250000, 300000, 350000, 400000] # prices
# Create and train
lr = GRNEXUSMachineLearning::LinearRegression.new
lr.fit(x_train, y_train)
# Predict
new_houses = [[1600, 3], [2100, 4]]
predictions = lr.predict(new_houses)
puts "Predicted prices: #{predictions.inspect}"
# Get model coefficients
puts "Coefficients: #{lr.coef.inspect}"
puts "Intercept: #{lr.intercept}"
# Calculate Rยฒ score
r2 = lr.score(x_train, y_train)
puts "Rยฒ score: #{r2.round(4)}"
# Save and load
lr.save('linear_regression.lnexus')
lr_loaded = GRNEXUSMachineLearning::LinearRegression.load('linear_regression.lnexus')
6.4 Logistic Regression
Python Example:
from grnexus import LogisticRegression
# Binary classification data
x_train = [
[1.0, 2.0], [2.0, 3.0], [3.0, 4.0], # Class 0
[8.0, 8.0], [9.0, 9.0], [10.0, 10.0] # Class 1
]
y_train = [0, 0, 0, 1, 1, 1]
# Create and train
logreg = LogisticRegression(learning_rate=0.1, max_iters=1000)
logreg.fit(x_train, y_train)
# Predict
test_points = [[2.5, 3.5], [9.0, 8.5]]
predictions = logreg.predict(test_points)
print(f"Predictions: {predictions}") # [0, 1]
# Get probabilities
probabilities = logreg.predict_proba(test_points)
print(f"Probabilities: {probabilities}")
# Save and load
logreg.save('logistic_regression.lnexus')
logreg_loaded = LogisticRegression.load('logistic_regression.lnexus')
Ruby Example:
require_relative 'ruby/grnexus'
# Binary classification data
x_train = [
[1.0, 2.0], [2.0, 3.0], [3.0, 4.0], # Class 0
[8.0, 8.0], [9.0, 9.0], [10.0, 10.0] # Class 1
]
y_train = [0, 0, 0, 1, 1, 1]
# Create and train
logreg = GRNEXUSMachineLearning::LogisticRegression.new(learning_rate: 0.1, max_iters: 1000)
logreg.fit(x_train, y_train)
# Predict
test_points = [[2.5, 3.5], [9.0, 8.5]]
predictions = logreg.predict(test_points)
puts "Predictions: #{predictions.inspect}" # [0, 1]
# Get probabilities
probabilities = logreg.predict_proba(test_points)
puts "Probabilities: #{probabilities.inspect}"
# Save and load
logreg.save('logistic_regression.lnexus')
logreg_loaded = GRNEXUSMachineLearning::LogisticRegression.load('logistic_regression.lnexus')
6.5 Gaussian Naive Bayes
Python Example:
from grnexus import GaussianNB
# Training data
x_train = [
[1.0, 2.0], [1.5, 1.8], [2.0, 2.5], # Class 0
[8.0, 8.0], [8.5, 8.2], [9.0, 9.0] # Class 1
]
y_train = [0, 0, 0, 1, 1, 1]
# Create and train
gnb = GaussianNB()
gnb.fit(x_train, y_train)
# Predict
test_points = [[2.0, 2.0], [8.5, 8.5]]
predictions = gnb.predict(test_points)
print(f"Predictions: {predictions}") # [0, 1]
# Get probabilities
probabilities = gnb.predict_proba(test_points)
print(f"Probabilities: {probabilities}")
# Save and load
gnb.save('naive_bayes.lnexus')
gnb_loaded = GaussianNB.load('naive_bayes.lnexus')
Ruby Example:
require_relative 'ruby/grnexus'
# Training data
x_train = [
[1.0, 2.0], [1.5, 1.8], [2.0, 2.5], # Class 0
[8.0, 8.0], [8.5, 8.2], [9.0, 9.0] # Class 1
]
y_train = [0, 0, 0, 1, 1, 1]
# Create and train
gnb = GRNEXUSMachineLearning::GaussianNB.new
gnb.fit(x_train, y_train)
# Predict
test_points = [[2.0, 2.0], [8.5, 8.5]]
predictions = gnb.predict(test_points)
puts "Predictions: #{predictions.inspect}" # [0, 1]
# Get probabilities
probabilities = gnb.predict_proba(test_points)
puts "Probabilities: #{probabilities.inspect}"
# Save and load
gnb.save('naive_bayes.lnexus')
gnb_loaded = GRNEXUSMachineLearning::GaussianNB.load('naive_bayes.lnexus')
Cross-Language ML Model Compatibility
Just like neural networks, classical ML models are fully compatible across languages:
# Train in Python
from grnexus import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(x_train, y_train)
knn.save('shared_knn.lnexus')
# Load and use in Ruby
knn = GRNEXUSMachineLearning::KNeighborsClassifier.load('shared_knn.lnexus')
predictions = knn.predict(test_data)
puts "Predictions from Python model: #{predictions.inspect}"Important Notes:
- Neural networks use
.nexusformat - Classical ML models use
.lnexusformat - Both formats are cross-language compatible
- Attempting to load the wrong format will raise a clear error message
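A minimal Python sketch of that guard, assuming only that the wrong-format error surfaces as an ordinary exception with a readable message (the exact exception class is not specified here):
from grnexus import NeuralNetwork

# Deliberately hand a classical-ML file (.lnexus) to the neural-network loader.
# Assumption: the mismatch is reported as a normal exception; catch broadly and print it.
try:
    NeuralNetwork.load('customer_segmentation.lnexus')
except Exception as err:
    print(f"Load rejected: {err}")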
7. Model Inspection (Without Loading!)
One of GRNexus's unique features: inspect models without loading them into memory.
# Inspect any .nexus model file
GRNexus::NeuralNetwork.inspect_model('models/production_model.nexus')
Output:
================================================================================
MODEL INSPECTION: models/production_model.nexus
================================================================================
Framework: GRNexus
Version: 1.0
Language: Python
Name: sentiment_analyzer
Created: 2025-11-24T15:30:45
Loss Function: cross_entropy
Optimizer: adam
Learning Rate: 0.001
Metadata:
Total Parameters: 11,847
Trainable Parameters: 11,847
Layers Count: 9
Architecture:
--------------------------------------------------------------------------------
Layer 1: DenseLayer
Units: 128
Activation: GELU
Trainable: true
Layer 2: BatchNormLayer
Trainable: true
Layer 3: DropoutLayer
Trainable: false
Layer 4: DenseLayer
Units: 64
Activation: Swish
Trainable: true
Layer 5: BatchNormLayer
Trainable: true
Layer 6: DenseLayer
Units: 32
Activation: Mish
Trainable: true
Layer 7: DenseLayer
Units: 16
Activation: ReLU
Trainable: true
Layer 8: DropoutLayer
Trainable: false
Layer 9: DenseLayer
Units: 2
Activation: Softmax
Trainable: true
Training History:
Epochs trained: 50
Final loss: 0.1234
Final accuracy: 95.67%
================================================================================
Use cases:
- ๐ Quick model analysis without loading
- ๐ Compare multiple models (see the sketch after this list)
- ๐ Debug architecture issues
- ๐ Generate model documentation
- ๐ Verify cross-language compatibility
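For the "compare multiple models" case, here is a small sketch that loops over saved files and calls the documented inspect_model on each; the models/*.nexus glob pattern is just an assumed directory layout:
import glob
from grnexus import NeuralNetwork

# Assumed layout: candidate models saved under models/ with the .nexus extension.
for path in sorted(glob.glob('models/*.nexus')):
    print(f"\n=== {path} ===")
    NeuralNetwork.inspect_model(path)  # prints architecture and metadata without loading weights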
๐งช Comprehensive Testing
GRNexus comes with 6 complete test suites covering every feature:
Run All Tests (One Command!)
# Windows
windows_run.bat
# macOS
chmod +x mac.sh && ./mac.sh
# Linux
chmod +x linux.sh && ./linux.sh
Individual Test Suites
| Test Suite | Command | What It Tests |
|---|---|---|
| Ruby Advanced | `ruby ruby/test/test_advanced_complete.rb` | Text generation, sentiment analysis, deep networks, callbacks |
| Ruby Architectures | `ruby ruby/test/test_complex_architectures.rb` | Complex architectures, all activations, numeric ops |
| Ruby โ Python | `ruby ruby/test/test_load_python_models.rb` | Loading Python models in Ruby, cross-language compatibility |
| Python Advanced | `python python/test/test_advanced_complete.py` | Text generation, sentiment analysis, deep networks, callbacks |
| Python Architectures | `python python/test/test_complex_architectures.py` | Complex architectures, all activations, numeric ops |
| Python โ Ruby | `python python/test/test_load_ruby_models.py` | Loading Ruby models in Python, cross-language compatibility |
Test Coverage
โ
Text Processing (NLP)
โโ Vocabulary creation
โโ Tokenization
โโ TF-IDF vectorization
โโ Text embeddings
โโ Document similarity
โ
Numeric Processing
โโ Statistical operations (mean, std, variance)
โโ Normalization (Z-score, MinMax)
โโ Time series (moving average, differences)
โโ Array operations (40+ functions)
โ
Neural Networks
โโ 35+ activation functions
โโ 12+ layer types
โโ Multiple loss functions
โโ Multiple optimizers
โโ Batch training
โ
Cross-Language
โโ Ruby โ Python model loading
โโ Python โ Ruby model loading
โโ Continue training across languages
โโ Model inspection
โ
Smart Training
โโ EarlyStopping callback
โโ ReduceLROnPlateau callback
โโ ModelCheckpoint callback
โโ Custom callbacks
โ
Model Management
โโ Save/Load models
โโ Model inspection
โโ Architecture summary
โโ Parameter counting
๐ Advanced Architectures & Best Practices
Multi-Task Learning
Python - Shared Layers with Multiple Outputs:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, DropoutLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Build a model with shared feature extraction
# Task 1: Sentiment classification (positive/negative)
# Task 2: Topic classification (tech/sports/politics)
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
# Shared layers (feature extraction)
model.add(DenseLayer(128, 100, activation=ReLU()))
model.add(DropoutLayer(rate=0.3))
model.add(DenseLayer(64, 128, activation=ReLU()))
# Task-specific output layers can be added separately
# For multi-task, train on combined loss
model.add(DenseLayer(5, 64, activation=Softmax())) # Combined output
model.train(x_train, y_train, epochs=50, batch_size=32)
Ruby - Shared Layers with Multiple Outputs:
require_relative 'ruby/grnexus'
# Build a model with shared feature extraction
# Task 1: Sentiment classification (positive/negative)
# Task 2: Topic classification (tech/sports/politics)
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
# Shared layers (feature extraction)
model.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 100, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.3))
model.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: 128, activation: GRNEXUSActivations::ReLU.new))
# Task-specific output layers can be added separately
# For multi-task, train on combined loss
model.add(GRNEXUSLayer::DenseLayer.new(units: 5, input_dim: 64, activation: GRNEXUSNormalization::Softmax.new)) # Combined output
model.train(x_train, y_train, epochs: 50, batch_size: 32)
Transfer Learning Pattern
Python - Feature Extraction:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Step 1: Train base model on large dataset
base_model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
base_model.add(DenseLayer(256, 1000, activation=ReLU()))
base_model.add(DenseLayer(128, 256, activation=ReLU()))
base_model.add(DenseLayer(64, 128, activation=ReLU()))
base_model.add(DenseLayer(10, 64, activation=Softmax()))
base_model.train(large_dataset_x, large_dataset_y, epochs=100)
base_model.save('base_model.nexus')
# Step 2: Load and fine-tune on specific task
transfer_model = NeuralNetwork.load('base_model.nexus')
# Continue training with smaller learning rate
transfer_model.learning_rate = 0.0001
transfer_model.train(specific_task_x, specific_task_y, epochs=20)
Ruby - Transfer Learning:
# Step 1: Train base model on large dataset
base_model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
base_model.add(GRNEXUSLayer::DenseLayer.new(units: 256, input_dim: 1000, activation: GRNEXUSActivations::ReLU.new))
base_model.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 256, activation: GRNEXUSActivations::ReLU.new))
base_model.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: 128, activation: GRNEXUSActivations::ReLU.new))
base_model.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 64, activation: GRNEXUSNormalization::Softmax.new))
base_model.train(large_dataset_x, large_dataset_y, epochs: 100)
base_model.save('base_model.nexus')
# Step 2: Load and fine-tune on specific task
transfer_model = GRNexus::NeuralNetwork.load('base_model.nexus')
# Continue training with smaller learning rate
transfer_model.learning_rate = 0.0001
transfer_model.train(specific_task_x, specific_task_y, epochs: 20)
Ensemble Learning
Python - Model Ensemble:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer
from lib.grnexus_activations import ReLU, GELU, Swish
from lib.grnexus_normalization import Softmax
# Train multiple models with different architectures
models = []
# Model 1: Deep network
model1 = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
model1.add(DenseLayer(128, 50, activation=ReLU()))
model1.add(DenseLayer(64, 128, activation=ReLU()))
model1.add(DenseLayer(10, 64, activation=Softmax()))
model1.train(x_train, y_train, epochs=50)
models.append(model1)
# Model 2: Wide network
model2 = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
model2.add(DenseLayer(256, 50, activation=ReLU()))
model2.add(DenseLayer(10, 256, activation=Softmax()))
model2.train(x_train, y_train, epochs=50)
models.append(model2)
# Model 3: Different activation
model3 = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
model3.add(DenseLayer(128, 50, activation=GELU()))
model3.add(DenseLayer(64, 128, activation=Swish()))
model3.add(DenseLayer(10, 64, activation=Softmax()))
model3.train(x_train, y_train, epochs=50)
models.append(model3)
# Ensemble prediction (voting)
def ensemble_predict(models, x):
predictions = [model.predict(x) for model in models]
# Average predictions
ensemble_pred = [[sum(p[i][j] for p in predictions) / len(predictions)
for j in range(len(predictions[0][i]))]
for i in range(len(predictions[0]))]
return ensemble_pred
# Use ensemble
test_predictions = ensemble_predict(models, x_test)
Ruby - Model Ensemble:
require_relative 'ruby/grnexus'
# Train multiple models with different architectures
models = []
# Model 1: Deep network
model1 = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
model1.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 50, activation: GRNEXUSActivations::ReLU.new))
model1.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: 128, activation: GRNEXUSActivations::ReLU.new))
model1.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 64, activation: GRNEXUSNormalization::Softmax.new))
model1.train(x_train, y_train, epochs: 50)
models << model1
# Model 2: Wide network
model2 = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
model2.add(GRNEXUSLayer::DenseLayer.new(units: 256, input_dim: 50, activation: GRNEXUSActivations::ReLU.new))
model2.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 256, activation: GRNEXUSNormalization::Softmax.new))
model2.train(x_train, y_train, epochs: 50)
models << model2
# Model 3: Different activation
model3 = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.001)
model3.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 50, activation: GRNEXUSActivations::GELU.new))
model3.add(GRNEXUSLayer::DenseLayer.new(units: 64, input_dim: 128, activation: GRNEXUSActivations::Swish.new))
model3.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 64, activation: GRNEXUSNormalization::Softmax.new))
model3.train(x_train, y_train, epochs: 50)
models << model3
# Ensemble prediction (voting)
def ensemble_predict(models, x)
predictions = models.map { |model| model.predict(x) }
# Average predictions
ensemble_pred = []
predictions[0].length.times do |i|
sample_pred = []
predictions[0][i].length.times do |j|
avg = predictions.map { |p| p[i][j] }.sum / predictions.length.to_f
sample_pred << avg
end
ensemble_pred << sample_pred
end
ensemble_pred
end
# Use ensemble
test_predictions = ensemble_predict(models, x_test)
puts "Ensemble predictions: #{test_predictions.length} samples"Hyperparameter Tuning
Python - Grid Search Pattern:
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer, DropoutLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
# Define hyperparameter grid
learning_rates = [0.001, 0.01, 0.1]
dropout_rates = [0.2, 0.3, 0.5]
hidden_units = [64, 128, 256]
best_accuracy = 0
best_params = {}
# Grid search
for lr in learning_rates:
for dropout in dropout_rates:
for units in hidden_units:
print(f"Testing: lr={lr}, dropout={dropout}, units={units}")
model = NeuralNetwork(loss='cross_entropy', learning_rate=lr)
model.add(DenseLayer(units, 50, activation=ReLU()))
model.add(DropoutLayer(rate=dropout))
model.add(DenseLayer(10, units, activation=Softmax()))
model.train(x_train, y_train, epochs=20, batch_size=32, verbose=False)
loss, accuracy = model.evaluate(x_val, y_val)
if accuracy > best_accuracy:
best_accuracy = accuracy
best_params = {'lr': lr, 'dropout': dropout, 'units': units}
model.save('best_model.nexus')
print(f"Best params: {best_params}")
print(f"Best accuracy: {best_accuracy:.2f}%")Ruby - Grid Search Pattern:
require_relative 'ruby/grnexus'
# Define hyperparameter grid
learning_rates = [0.001, 0.01, 0.1]
dropout_rates = [0.2, 0.3, 0.5]
hidden_units = [64, 128, 256]
best_accuracy = 0
best_params = {}
# Grid search
learning_rates.each do |lr|
dropout_rates.each do |dropout|
hidden_units.each do |units|
puts "Testing: lr=#{lr}, dropout=#{dropout}, units=#{units}"
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: lr)
model.add(GRNEXUSLayer::DenseLayer.new(units: units, input_dim: 50, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::DropoutLayer.new(rate: dropout))
model.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: units, activation: GRNEXUSNormalization::Softmax.new))
model.train(x_train, y_train, epochs: 20, batch_size: 32, verbose: false)
loss, accuracy = model.evaluate(x_val, y_val)
if accuracy > best_accuracy
best_accuracy = accuracy
best_params = {lr: lr, dropout: dropout, units: units}
model.save('best_model.nexus')
end
end
end
end
puts "Best params: #{best_params.inspect}"
puts "Best accuracy: #{best_accuracy.round(2)}%"Best Practices Summary
1. Data Preparation:
Python:
# Always normalize/standardize your data
from lib.grnexus_numeric_proccessing import ZScoreNormalize
normalizer = ZScoreNormalize()
x_train_normalized = [normalizer.process(sample) for sample in x_train]
Ruby:
# Always normalize/standardize your data
normalizer = GRNEXUSNumericProcessing::ZScoreNormalize.new
x_train_normalized = x_train.map { |sample| normalizer.process(sample) }
2. Train/Validation/Test Split:
Python:
# Split data properly
train_size = int(0.7 * len(data))
val_size = int(0.15 * len(data))
x_train = data[:train_size]
x_val = data[train_size:train_size+val_size]
x_test = data[train_size+val_size:]
Ruby:
# Split data properly
train_size = (0.7 * data.length).to_i
val_size = (0.15 * data.length).to_i
x_train = data[0...train_size]
x_val = data[train_size...(train_size + val_size)]
x_test = data[(train_size + val_size)..-1]
3. Use Callbacks:
Python:
from lib.grnexus_callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
callbacks = [
EarlyStopping(patience=10, restore_best_weights=True),
ReduceLROnPlateau(factor=0.5, patience=5),
ModelCheckpoint('best_model.nexus', save_best_only=True)
]
model.train(x_train, y_train, validation_data=(x_val, y_val), callbacks=callbacks)
Ruby:
early_stop = GRNEXUSCallbacks::EarlyStopping.new(patience: 10, restore_best_weights: true)
lr_reduce = GRNEXUSCallbacks::ReduceLROnPlateau.new(factor: 0.5, patience: 5)
checkpoint = GRNEXUSCallbacks::ModelCheckpoint.new(
filepath: 'best_model.nexus',
save_best_only: true
)
callbacks = [early_stop, lr_reduce, checkpoint]
model.train(x_train, y_train, validation_data: [x_val, y_val], callbacks: callbacks)
4. Regularization:
Python:
# Use dropout and batch normalization
model.add(DenseLayer(128, 64, activation=ReLU()))
model.add(BatchNormLayer())
model.add(DropoutLayer(rate=0.3))
Ruby:
# Use dropout and batch normalization
model.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 64, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::BatchNormLayer.new)
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.3))
5. Monitor Training:
Python:
# Always use validation data and verbose mode during development
history = model.train(
x_train, y_train,
validation_data=(x_val, y_val),
epochs=100,
batch_size=32,
verbose=True
)
# Plot training history (if using matplotlib)
import matplotlib.pyplot as plt
plt.plot(history['loss'], label='Training Loss')
plt.plot(history['val_loss'], label='Validation Loss')
plt.legend()
plt.show()
Ruby:
# Always use validation data and verbose mode during development
history = model.train(
x_train, y_train,
validation_data: [x_val, y_val],
epochs: 100,
batch_size: 32,
verbose: true
)
# Access training history
puts "Final training loss: #{history['loss'].last.round(4)}"
puts "Final validation loss: #{history['val_loss'].last.round(4)}"6. Save Checkpoints:
Python:
# Save models at different stages
model.save('model_epoch_10.nexus')
# Continue training
model.train(x_train, y_train, epochs=10)
model.save('model_epoch_20.nexus')
Ruby:
# Save models at different stages
model.save('model_epoch_10.nexus')
# Continue training
model.train(x_train, y_train, epochs: 10)
model.save('model_epoch_20.nexus')
7. Cross-Language Development:
# Python team: Train and save
model.train(x_train, y_train, epochs=50)
model.save('shared_model.nexus')
# Ruby team: Load and deploy
model = GRNexus::NeuralNetwork.load('shared_model.nexus')
predictions = model.predict(production_data)
๐ API Reference
Core Classes
NeuralNetwork - The Heart of GRNexus
The NeuralNetwork class is the main interface for building, training, and deploying neural networks.
Constructor Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `loss` | string | `'mse'` | Loss function: `'mse'`, `'cross_entropy'`, `'binary_cross_entropy'` |
| `optimizer` | string | `'sgd'` | Optimizer: `'sgd'`, `'adam'`, `'rmsprop'`, `'adagrad'` |
| `learning_rate` | float | `0.01` | Initial learning rate for training |
| `name` | string | `'model'` | Model name for identification |
Ruby - Complete API:
require_relative 'ruby/grnexus'
# 1. CREATE MODEL
model = GRNexus::NeuralNetwork.new(
loss: 'cross_entropy', # Loss function
optimizer: 'adam', # Optimizer algorithm
learning_rate: 0.001, # Learning rate
name: 'my_classifier' # Model name
)
# 2. BUILD ARCHITECTURE
model.add(GRNEXUSLayer::DenseLayer.new(units: 128, input_dim: 50, activation: GRNEXUSActivations::ReLU.new))
model.add(GRNEXUSLayer::BatchNormLayer.new)
model.add(GRNEXUSLayer::DropoutLayer.new(rate: 0.3))
model.add(GRNEXUSLayer::DenseLayer.new(units: 10, input_dim: 128, activation: GRNEXUSNormalization::Softmax.new))
# 3. VIEW ARCHITECTURE
model.summary
# Shows: layers, parameters, output shapes
# 4. COMPILE (optional, auto-compiled on first train)
model.compile(loss: 'cross_entropy')
# 5. TRAIN MODEL
history = model.train(
x_train, y_train,
epochs: 100, # Number of epochs
batch_size: 32, # Batch size
validation_data: [x_val, y_val], # Optional validation
callbacks: [early_stop, checkpoint], # Optional callbacks
verbose: true # Show progress
)
# 6. EVALUATE MODEL
loss, accuracy = model.evaluate(x_test, y_test)
puts "Test Loss: #{loss.round(4)}"
puts "Test Accuracy: #{accuracy.round(2)}%"
# 7. MAKE PREDICTIONS
predictions = model.predict(x_new)
# Returns: Array of predictions
# Single prediction
single_pred = model.predict([x_new[0]])[0]
# 8. SAVE MODEL
model.save('models/my_model.nexus')
# Saves: architecture, weights, training history, metadata
# 9. LOAD MODEL
loaded_model = GRNexus::NeuralNetwork.load('models/my_model.nexus')
# Fully functional, ready to predict or continue training
# 10. INSPECT MODEL (without loading!)
GRNexus::NeuralNetwork.inspect_model('models/my_model.nexus')
# Shows: architecture, parameters, metadata, training history
# 11. ACCESS MODEL PROPERTIES
puts model.name # Model name
puts model.loss # Loss function
puts model.optimizer # Optimizer
puts model.learning_rate # Current learning rate
puts model.layers.length # Number of layers
# 12. MODIFY LEARNING RATE
model.learning_rate = 0.0001 # Reduce for fine-tuning
# 13. GET TRAINING HISTORY
puts history[:loss] # Training loss per epoch
puts history[:accuracy] # Training accuracy per epoch
puts history[:val_loss] # Validation loss per epoch
puts history[:val_accuracy] # Validation accuracy per epoch
Python - Complete API:
from grnexus import NeuralNetwork
from lib.grnexus_layers import *
from lib.grnexus_activations import *
from lib.grnexus_normalization import Softmax
# 1. CREATE MODEL
model = NeuralNetwork(
loss='cross_entropy', # Loss function
optimizer='adam', # Optimizer algorithm
learning_rate=0.001, # Learning rate
name='my_classifier' # Model name
)
# 2. BUILD ARCHITECTURE
model.add(DenseLayer(units=128, input_dim=50, activation=ReLU()))
model.add(BatchNormLayer())
model.add(DropoutLayer(rate=0.3))
model.add(DenseLayer(units=10, input_dim=128, activation=Softmax()))
# 3. VIEW ARCHITECTURE
model.summary()
# Shows: layers, parameters, output shapes
# 4. COMPILE (optional, auto-compiled on first train)
model.compile(loss='cross_entropy')
# 5. TRAIN MODEL
history = model.train(
x_train, y_train,
epochs=100, # Number of epochs
batch_size=32, # Batch size
validation_data=(x_val, y_val), # Optional validation
callbacks=[early_stop, checkpoint], # Optional callbacks
verbose=True # Show progress
)
# 6. EVALUATE MODEL
loss, accuracy = model.evaluate(x_test, y_test)
print(f"Test Loss: {loss:.4f}")
print(f"Test Accuracy: {accuracy:.2f}%")
# 7. MAKE PREDICTIONS
predictions = model.predict(x_new)
# Returns: List of predictions
# Single prediction
single_pred = model.predict([x_new[0]])[0]
# 8. SAVE MODEL
model.save('models/my_model.nexus')
# Saves: architecture, weights, training history, metadata
# 9. LOAD MODEL
loaded_model = NeuralNetwork.load('models/my_model.nexus')
# Fully functional, ready to predict or continue training
# 10. INSPECT MODEL (without loading!)
NeuralNetwork.inspect_model('models/my_model.nexus')
# Shows: architecture, parameters, metadata, training history
# 11. ACCESS MODEL PROPERTIES
print(model.name) # Model name
print(model.loss) # Loss function
print(model.optimizer) # Optimizer
print(model.learning_rate) # Current learning rate
print(len(model.layers)) # Number of layers
# 12. MODIFY LEARNING RATE
model.learning_rate = 0.0001 # Reduce for fine-tuning
# 13. GET TRAINING HISTORY
print(history['loss']) # Training loss per epoch
print(history['accuracy']) # Training accuracy per epoch
print(history['val_loss']) # Validation loss per epoch
print(history['val_accuracy']) # Validation accuracy per epoch
Complete Training Example with All Features:
from grnexus import NeuralNetwork
from lib.grnexus_layers import *
from lib.grnexus_activations import *
from lib.grnexus_callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
# Create model
model = NeuralNetwork(
loss='cross_entropy',
optimizer='adam',
learning_rate=0.001,
name='production_model'
)
# Build architecture
model.add(DenseLayer(256, 100, activation=ReLU()))
model.add(BatchNormLayer())
model.add(DropoutLayer(rate=0.4))
model.add(DenseLayer(128, 256, activation=ReLU()))
model.add(BatchNormLayer())
model.add(DropoutLayer(rate=0.3))
model.add(DenseLayer(64, 128, activation=ReLU()))
model.add(DenseLayer(10, 64, activation=Softmax()))
# View architecture before training
print("\n" + "="*80)
model.summary()
print("="*80 + "\n")
# Setup callbacks
callbacks = [
EarlyStopping(
monitor='val_loss',
patience=10,
verbose=True
),
ReduceLROnPlateau(
monitor='val_loss',
factor=0.5,
patience=5,
min_lr=0.00001,
verbose=True
),
ModelCheckpoint(
filepath='models/best_model.nexus',
monitor='val_accuracy',
save_best_only=True,
verbose=True
)
]
# Train with all features
print("Starting training...")
history = model.train(
x_train, y_train,
epochs=100,
batch_size=32,
validation_data=(x_val, y_val),
callbacks=callbacks,
verbose=True
)
# Evaluate
print("\nEvaluating model...")
test_loss, test_accuracy = model.evaluate(x_test, y_test)
print(f"Final Test Loss: {test_loss:.4f}")
print(f"Final Test Accuracy: {test_accuracy:.2f}%")
# Save final model
model.save('models/final_model.nexus')
print("โ Model saved successfully")
# Analyze training history
print("\nTraining Summary:")
print(f" Best validation accuracy: {max(history['val_accuracy']):.2f}%")
print(f" Final training loss: {history['loss'][-1]:.4f}")
print(f" Total epochs: {len(history['loss'])}")
# Make predictions
print("\nMaking predictions on new data...")
predictions = model.predict(x_new)
for i, pred in enumerate(predictions[:5]): # Show first 5
predicted_class = pred.index(max(pred))
confidence = max(pred) * 100
print(f" Sample {i+1}: Class {predicted_class} (confidence: {confidence:.2f}%)")Text Processing
# Ruby
vocab = GRNexusTextProcessing::Vocabulary.new(documents, max_vocab_size: 1000)
indices = vocab.normalize_text(text, max_length: 20)
text = vocab.denormalize_indices(indices)
vectorizer = GRNexusTextProcessing::TextVectorizer.new(vocab)
vector = vectorizer.vectorize(text)
embeddings = GRNexusTextProcessing::TextEmbeddings.new(vocab, embedding_dim: 100)
similar_indices, similarities = embeddings.find_similar(token_idx, top_k: 10)
# Python
vocab = Vocabulary(documents, max_vocab_size=1000)
indices = vocab.normalize_text(text, max_length=20)
text = vocab.denormalize_indices(indices)
vectorizer = TextVectorizer(vocab)
vector = vectorizer.vectorize(text)
embeddings = TextEmbeddings(vocab, embedding_dim=100)
similar_indices, similarities = embeddings.find_similar(token_idx, top_k=10)
Numeric Processing
# Ruby
# Statistical operations
mean = GRNEXUSNumericProcessing::MeanArray.new.process(data)
std = GRNEXUSNumericProcessing::StdArray.new.process(data)
# Normalization
zscore = GRNEXUSNumericProcessing::ZScoreNormalize.new
normalized = zscore.process(data)
minmax = GRNEXUSNumericProcessing::MinMaxNormalize.new(min_range: 0.0, max_range: 1.0)
normalized = minmax.process(data)
# Time series
ma = GRNEXUSNumericProcessing::MovingAverage.new(window_size: 5)
smoothed = ma.process(time_series)
diff = GRNEXUSNumericProcessing::FiniteDifference.new
differences = diff.process(data)
# Python
# Statistical operations
mean = MeanArray().process(data)
std = StdArray().process(data)
# Normalization
zscore = ZScoreNormalize()
normalized = zscore.process(data)
minmax = MinMaxNormalize(min_range=0.0, max_range=1.0)
normalized = minmax.process(data)
# Time series
ma = MovingAverage(window_size=5)
smoothed = ma.process(time_series)
diff = FiniteDifference()
differences = diff.process(data)
๐ฏ Complete Layer & Activation Reference
All Available Layers
Ruby - Complete Layer Examples
require_relative 'ruby/grnexus'
model = GRNexus::NeuralNetwork.new(loss: 'cross_entropy', learning_rate: 0.01)
# 1. DenseLayer (Fully Connected)
model.add(GRNEXUSLayer::DenseLayer.new(
units: 128,
input_dim: 64,
activation: GRNEXUSActivations::ReLU.new
))
# 2. ActivationLayer (Standalone)
model.add(GRNEXUSLayer::ActivationLayer.new(
GRNEXUSActivations::Tanh.new
))
# 3. DropoutLayer (Regularization)
model.add(GRNEXUSLayer::DropoutLayer.new(
rate: 0.5 # Drop 50% of neurons during training
))
# 4. BatchNormLayer (Normalization)
model.add(GRNEXUSLayer::BatchNormLayer.new(
epsilon: 1e-5,
momentum: 0.1
))
# 5. Conv2DLayer (Convolutional)
model.add(GRNEXUSLayer::Conv2DLayer.new(
filters: 32,
kernel_size: 3,
stride: 1,
padding: 'same',
activation: GRNEXUSActivations::ReLU.new
))
# 6. MaxPoolingLayer (Downsampling)
# Reduces spatial dimensions by taking maximum value in each pool
# Input: 2D image [[...], [...]] or batch of 2D images [[[...], [...]], [[...], [...]]]
model.add(GRNEXUSLayer::MaxPoolingLayer.new(
pool_size: 2, # Can be integer or [height, width]
stride: 2 # Can be integer or [height, width], defaults to pool_size
))
# 7. LSTMLayer (Recurrent)
model.add(GRNEXUSLayer::LSTMLayer.new(
units: 64,
return_sequences: true
))
# 8. GRULayer (Recurrent)
model.add(GRNEXUSLayer::GRULayer.new(
units: 64,
return_sequences: false
))
# 9. EmbeddingLayer (Word Embeddings)
model.add(GRNEXUSLayer::EmbeddingLayer.new(
vocab_size: 10000,
embedding_dim: 128
))
# 10. FlattenLayer (Reshape to 1D)
model.add(GRNEXUSLayer::FlattenLayer.new)
# 11. ReshapeLayer (Custom Shape)
model.add(GRNEXUSLayer::ReshapeLayer.new(
target_shape: [28, 28, 1]
))
# 12. SoftmaxLayer (Probability Distribution)
model.add(GRNEXUSLayer::SoftmaxLayer.new)
Python - Complete Layer Examples
from grnexus import NeuralNetwork
from lib.grnexus_layers import *
from lib.grnexus_activations import *
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.01)
# 1. DenseLayer (Fully Connected)
model.add(DenseLayer(
units=128,
input_dim=64,
activation=ReLU()
))
# 2. ActivationLayer (Standalone)
model.add(ActivationLayer(Tanh()))
# 3. DropoutLayer (Regularization)
model.add(DropoutLayer(
rate=0.5 # Drop 50% of neurons during training
))
# 4. BatchNormLayer (Normalization)
model.add(BatchNormLayer(
epsilon=1e-5,
momentum=0.1
))
# 5. Conv2DLayer (Convolutional)
model.add(Conv2DLayer(
filters=32,
kernel_size=3,
stride=1,
padding='same',
activation=ReLU()
))
# 6. MaxPoolingLayer (Downsampling)
# Reduces spatial dimensions by taking maximum value in each pool
# Input: 2D image [[...], [...]] or batch of 2D images [[[...], [...]], [[...], [...]]]
model.add(MaxPoolingLayer(
pool_size=2, # Can be integer or [height, width]
stride=2 # Can be integer or [height, width], defaults to pool_size
))
# 7. LSTMLayer (Recurrent)
model.add(LSTMLayer(
units=64,
return_sequences=True
))
# 8. GRULayer (Recurrent)
model.add(GRULayer(
units=64,
return_sequences=False
))
# 9. EmbeddingLayer (Word Embeddings)
model.add(EmbeddingLayer(
vocab_size=10000,
embedding_dim=128
))
# 10. FlattenLayer (Reshape to 1D)
model.add(FlattenLayer())
# 11. ReshapeLayer (Custom Shape)
model.add(ReshapeLayer(
target_shape=(28, 28, 1)
))
# 12. SoftmaxLayer (Probability Distribution)
model.add(SoftmaxLayer())
All Available Activations (35+)
Ruby - All Activation Functions
require_relative 'ruby/grnexus'
# ============================================================================
# BASIC ACTIVATIONS
# ============================================================================
# Linear (Identity)
GRNEXUSActivations::Linear.new
# Step (Binary)
GRNEXUSActivations::Step.new
# Sigmoid (0 to 1)
GRNEXUSActivations::Sigmoid.new
# Tanh (-1 to 1)
GRNEXUSActivations::Tanh.new
# ReLU (Rectified Linear Unit)
GRNEXUSActivations::ReLU.new
# ============================================================================
# MODERN ACTIVATIONS (State-of-the-art)
# ============================================================================
# GELU (Gaussian Error Linear Unit) - Used in GPT, BERT
GRNEXUSActivations::GELU.new
# Swish (Self-Gated) - Google's discovery
GRNEXUSActivations::Swish.new
# Mish (Self-Regularized) - State-of-the-art
GRNEXUSActivations::Mish.new
# LiSHT (Linearly Scaled Hyperbolic Tangent)
GRNEXUSActivations::LiSHT.new
# SiLU (Sigmoid Linear Unit) - Same as Swish
GRNEXUSActivations::SiLU.new
# ============================================================================
# PARAMETRIC ACTIVATIONS
# ============================================================================
# LeakyReLU (Leaky Rectified Linear Unit)
GRNEXUSActivations::LeakyReLU.new(alpha: 0.01)
# PReLU (Parametric ReLU)
GRNEXUSActivations::PReLU.new(alpha: 0.25)
# ELU (Exponential Linear Unit)
GRNEXUSActivations::ELU.new(alpha: 1.0)
# SELU (Scaled Exponential Linear Unit) - Self-normalizing
GRNEXUSActivations::SELU.new
# CELU (Continuously Differentiable ELU)
GRNEXUSActivations::CELU.new(alpha: 1.0)
# ============================================================================
# SPECIALIZED ACTIVATIONS
# ============================================================================
# Maxout
GRNEXUSActivations::Maxout.new
# Minout
GRNEXUSActivations::Minout.new
# GLU (Gated Linear Unit)
GRNEXUSActivations::GLU.new
# ARelu (Adaptive ReLU)
GRNEXUSActivations::ARelu.new
# FReLU (Funnel ReLU)
GRNEXUSActivations::FReLU.new
# BReLU (Bounded ReLU)
GRNEXUSActivations::BReLU.new
# ============================================================================
# SHRINKAGE ACTIVATIONS
# ============================================================================
# HardShrink
GRNEXUSActivations::HardShrink.new(lambda: 0.5)
# SoftShrink
GRNEXUSActivations::SoftShrink.new(lambda: 0.5)
# TanhShrink
GRNEXUSActivations::TanhShrink.new
# ============================================================================
# SMOOTH ACTIVATIONS
# ============================================================================
# Softplus (Smooth ReLU)
GRNEXUSActivations::Softplus.new
# Softsign
GRNEXUSActivations::Softsign.new
# HardSigmoid
GRNEXUSActivations::HardSigmoid.new
# HardTanh
GRNEXUSActivations::HardTanh.new
# ============================================================================
# ADVANCED ACTIVATIONS
# ============================================================================
# Snake (Periodic)
GRNEXUSActivations::Snake.new(frequency: 1.0)
# SnakeBeta (Learnable Periodic)
GRNEXUSActivations::SnakeBeta.new(alpha: 1.0, beta: 1.0)
# ============================================================================
# VARIANT ACTIVATIONS
# ============================================================================
# ThresholdedReLU
GRNEXUSActivations::ThresholdedReLU.new(theta: 1.0)
# ReLU6 (Bounded ReLU)
GRNEXUSActivations::ReLU6.new
# HardSwish (Mobile-optimized)
GRNEXUSActivations::HardSwish.new
# ISRU (Inverse Square Root Unit)
GRNEXUSActivations::ISRU.new(alpha: 1.0)
# ISRLU (Inverse Square Root Linear Unit)
GRNEXUSActivations::ISRLU.new(alpha: 1.0)
# ============================================================================
# SQUARED ACTIVATIONS
# ============================================================================
# ReLUSquared
GRNEXUSActivations::ReLUSquared.new
# SquaredReLU
GRNEXUSActivations::SquaredReLU.new
# ============================================================================
# NORMALIZATION (Often used as output activations)
# ============================================================================
# Softmax (Probability distribution)
GRNEXUSNormalization::Softmax.new
Python - All Activation Functions
from lib.grnexus_activations import *
from lib.grnexus_normalization import Softmax
# ============================================================================
# BASIC ACTIVATIONS
# ============================================================================
Linear() # Identity
Step() # Binary
Sigmoid() # 0 to 1
Tanh() # -1 to 1
ReLU() # Rectified Linear Unit
# ============================================================================
# MODERN ACTIVATIONS (State-of-the-art)
# ============================================================================
GELU() # Gaussian Error Linear Unit - Used in GPT, BERT
Swish() # Self-Gated - Google's discovery
Mish() # Self-Regularized - State-of-the-art
LiSHT() # Linearly Scaled Hyperbolic Tangent
SiLU() # Sigmoid Linear Unit - Same as Swish
# ============================================================================
# PARAMETRIC ACTIVATIONS
# ============================================================================
LeakyReLU(alpha=0.01) # Leaky Rectified Linear Unit
PReLU(alpha=0.25) # Parametric ReLU
ELU(alpha=1.0) # Exponential Linear Unit
SELU() # Scaled ELU - Self-normalizing
CELU(alpha=1.0) # Continuously Differentiable ELU
# ============================================================================
# SPECIALIZED ACTIVATIONS
# ============================================================================
Maxout() # Maximum of inputs
Minout() # Minimum of inputs
GLU() # Gated Linear Unit
ARelu() # Adaptive ReLU
FReLU() # Funnel ReLU
BReLU() # Bounded ReLU
# ============================================================================
# SHRINKAGE ACTIVATIONS
# ============================================================================
HardShrink(lambda_=0.5) # Hard shrinkage
SoftShrink(lambda_=0.5) # Soft shrinkage
TanhShrink() # Tanh shrinkage
# ============================================================================
# SMOOTH ACTIVATIONS
# ============================================================================
Softplus() # Smooth ReLU
Softsign() # Smooth sign
HardSigmoid() # Piecewise linear sigmoid
HardTanh() # Piecewise linear tanh
# ============================================================================
# ADVANCED ACTIVATIONS
# ============================================================================
Snake(frequency=1.0) # Periodic activation
SnakeBeta(alpha=1.0, beta=1.0) # Learnable periodic
# ============================================================================
# VARIANT ACTIVATIONS
# ============================================================================
ThresholdedReLU(theta=1.0) # ReLU with threshold
ReLU6() # Bounded ReLU (0 to 6)
HardSwish() # Mobile-optimized Swish
ISRU(alpha=1.0) # Inverse Square Root Unit
ISRLU(alpha=1.0) # Inverse Square Root Linear Unit
# ============================================================================
# SQUARED ACTIVATIONS
# ============================================================================
ReLUSquared() # ReLU then square
SquaredReLU() # Square then ReLU
# ============================================================================
# NORMALIZATION (Often used as output activations)
# ============================================================================
Softmax()  # Probability distribution
Activation Function Comparison
| Activation | Range | Use Case | Pros | Cons |
|---|---|---|---|---|
| ReLU | [0, โ) | General purpose | Fast, simple | Dead neurons |
| GELU | (-โ, โ) | Transformers, NLP | State-of-the-art | Slower |
| Swish | (-โ, โ) | Deep networks | Smooth, self-gated | Computationally expensive |
| Mish | (-โ, โ) | Image classification | Best accuracy | Most expensive |
| Tanh | (-1, 1) | RNNs, small networks | Zero-centered | Vanishing gradient |
| Sigmoid | (0, 1) | Binary classification | Probabilistic | Vanishing gradient |
| LeakyReLU | (-โ, โ) | Deep networks | No dead neurons | Needs tuning |
| SELU | (-โ, โ) | Self-normalizing nets | Auto-normalization | Specific initialization |
| ELU | (-ฮฑ, โ) | Deep networks | Smooth, negative values | Slower than ReLU |
| Softmax | (0, 1) | Multi-class output | Probability distribution | Only for output layer |
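To make the ranges in the table concrete, here are plain-Python reference formulas for a few of these activations (standard textbook definitions written for illustration; they are independent of GRNexus's C implementations, which may use different approximations):
import math

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def swish(x):
    return x * sigmoid(x)  # also known as SiLU

def mish(x):
    return x * math.tanh(math.log1p(math.exp(x)))  # x * tanh(softplus(x))

def gelu(x):
    # common tanh approximation of GELU
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  relu={relu(x):+.3f}  gelu={gelu(x):+.3f}  swish={swish(x):+.3f}  mish={mish(x):+.3f}")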
๐๏ธ Layer Types (12+)
| Layer | Description | Parameters | Use Case |
|---|---|---|---|
| DenseLayer | Fully connected with Xavier/He init | `units`, `input_dim`, `activation` | Standard networks |
| ActivationLayer | Standalone activation | `activation` | Flexible activation placement |
| DropoutLayer | Regularization (auto train/test mode) | `rate` | Prevent overfitting |
| BatchNormLayer | Batch normalization + running stats | `epsilon`, `momentum` | Stable training, faster convergence |
| Conv2DLayer | 2D convolution | `filters`, `kernel_size`, `stride` | Image processing, CNNs |
| MaxPoolingLayer | Spatial downsampling | `pool_size`, `stride` | Reduce spatial dimensions |
| LSTMLayer | Long Short-Term Memory | `units`, `return_sequences` | Sequence modeling, time series |
| GRULayer | Gated Recurrent Unit | `units`, `return_sequences` | Faster alternative to LSTM |
| SoftmaxLayer | Probability distribution | - | Multi-class classification |
| EmbeddingLayer | Word embeddings | `vocab_size`, `embedding_dim` | NLP, text processing |
| FlattenLayer | Reshape to 1D | - | CNN to Dense transition |
| ReshapeLayer | Arbitrary reshaping | `target_shape` | Flexible architecture design |
Important Notes:
MaxPoolingLayer Input Format:
- Single 2D image: `[[1, 2, 3], [4, 5, 6], [7, 8, 9]]` (height × width)
- Batch of 2D images: `[[[1, 2], [3, 4]], [[5, 6], [7, 8]]]` (batch × height × width)
- The layer automatically detects whether input is a single image or batch (a worked example of the pooling arithmetic follows below)
- After Conv2D, the output is typically in format (batch, height, width, channels), so you may need to process each channel separately or use appropriate reshaping
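As a worked example of the arithmetic MaxPoolingLayer performs, here is 2×2 max pooling with stride 2 on a single 4×4 image in plain Python (illustration only; it does not call the layer itself):
# One 4x4 image (height x width); pool_size = stride = 2.
image = [
    [1, 3, 2, 4],
    [5, 6, 1, 2],
    [7, 2, 8, 1],
    [3, 4, 9, 0],
]
pool, stride = 2, 2
pooled = [
    [max(image[i + di][j + dj] for di in range(pool) for dj in range(pool))
     for j in range(0, len(image[0]) - pool + 1, stride)]
    for i in range(0, len(image) - pool + 1, stride)
]
print(pooled)  # => [[6, 4], [7, 9]]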
Example: Building a CNN:
from grnexus import NeuralNetwork
from lib.grnexus_layers import *
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax
model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
model.add(Conv2DLayer(filters=32, kernel_size=3, input_shape=(28, 28, 1)))
model.add(MaxPoolingLayer(pool_size=2))
model.add(Conv2DLayer(filters=64, kernel_size=3))
model.add(MaxPoolingLayer(pool_size=2))
model.add(FlattenLayer())
model.add(DenseLayer(128, activation=ReLU()))
model.add(DropoutLayer(rate=0.5))
model.add(DenseLayer(10, activation=Softmax()))
โก Performance Benchmarks
GRNexus's native C core delivers 10-100x speedup over pure Python/Ruby:
| Operation | Pure Python/Ruby | GRNexus (C) | Speedup | Notes |
|---|---|---|---|---|
| Activation (1M ops) | 850ms | 8ms | 106x โก | GELU, Swish, Mish |
| Dense Forward Pass | 320ms | 12ms | 27x โก | Matrix multiplication |
| Batch Normalization | 180ms | 6ms | 30x โก | Running stats |
| Text Vectorization | 450ms | 15ms | 30x โก | TF-IDF computation |
| Numeric Statistics | 120ms | 4ms | 30x โก | Mean, std, variance |
| Dropout (training) | 95ms | 3ms | 32x โก | Random masking |
| Model Save/Load | 250ms | 45ms | 5.5x โก | Compression + serialization |
Real-world training comparison:
Dataset: 10,000 samples, 50 features, 10 classes
Architecture: 3 hidden layers (128, 64, 32 units)
Epochs: 100
Pure Python: ~45 minutes
GRNexus: ~2.5 minutes (18x faster!)
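Those timings come from the authors' environment. If you want to measure a comparable run yourself, a minimal sketch along these lines could work; the synthetic data, seed, and optimizer settings are illustrative assumptions, while the layer sizes follow the benchmark architecture above:
import random
import time
from grnexus import NeuralNetwork
from lib.grnexus_layers import DenseLayer
from lib.grnexus_activations import ReLU
from lib.grnexus_normalization import Softmax

# Synthetic stand-in dataset: 10,000 samples, 50 features, 10 one-hot classes.
random.seed(0)
def one_hot(label, n=10):
    return [1.0 if i == label else 0.0 for i in range(n)]
x_train = [[random.random() for _ in range(50)] for _ in range(10_000)]
y_train = [one_hot(random.randrange(10)) for _ in range(10_000)]

model = NeuralNetwork(loss='cross_entropy', learning_rate=0.001)
model.add(DenseLayer(128, 50, activation=ReLU()))
model.add(DenseLayer(64, 128, activation=ReLU()))
model.add(DenseLayer(32, 64, activation=ReLU()))
model.add(DenseLayer(10, 32, activation=Softmax()))

start = time.perf_counter()
model.train(x_train, y_train, epochs=100, batch_size=32, verbose=False)
print(f"Wall-clock training time: {time.perf_counter() - start:.1f}s")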
Why so fast?
- โ Native C implementation for compute-intensive operations
- โ Optimized memory management
- โ Efficient matrix operations
- โ Zero Python/Ruby overhead in hot paths
- โ Compiled with -O3 optimization
๐ Learning Resources
Example Projects Included
- XOR Problem (`ruby/example_xor.rb`, `python/example_xor.py`)
  - Classic neural network introduction
  - Perfect for beginners
- Advanced Demos (`ruby/test/advanced test/`, `python/test/advanced test/`)
  - Digit Recognition (GTK3 interactive app) - Draw and recognize handwritten digits
  - Sentiment Analysis (3 variants) - Simple, Embeddings, and Sequence-based
  - 3D Image Classifier - RGB image processing with tensors
  - Complete production-ready examples
- Text Generation (`ruby/test/test_advanced_complete.rb`)
  - Next-word prediction
  - Vocabulary management
  - Sequence modeling
- Sentiment Analysis (`python/test/test_advanced_complete.py`)
  - Binary classification
  - Text vectorization
  - Real-world NLP
- Time Series Prediction (`ruby/test/test_load_python_models.rb`)
  - Sliding window approach
  - Numeric preprocessing
  - Forecasting
- Deep Networks (all test files)
  - Modern activations (GELU, Swish, Mish)
  - Batch normalization
  - Dropout regularization
Documentation Structure
GRNexus/
โโโ README.md # You are here!
โโโ CAMBIOS_IMPLEMENTADOS.md # Changelog (Spanish)
โโโ docs/
โ โโโ es/ # Spanish documentation
โ โโโ fr/ # French documentation
โ โโโ pt/ # Portuguese documentation
โโโ ruby/
โ โโโ grnexus.rb # Main Ruby API
โ โโโ lib/ # Ruby modules
โ โโโ example_xor.rb # Quick start example
โ โโโ test/ # Complete test suites
โโโ python/
โโโ grnexus.py # Main Python API
โโโ lib/ # Python modules
โโโ example_xor.py # Quick start example
โโโ test/ # Complete test suites
๐ Roadmap
v2.1 (Coming Soon)
- GPU acceleration (CUDA support)
- Transformer layers (attention mechanism)
- Model quantization (INT8, FP16)
- ONNX export support
- Web deployment (WASM)
v2.2 (Future)
- Distributed training
- AutoML capabilities
- Model compression
- Mobile deployment (iOS, Android)
- Real-time inference API
๐ Why Choose GRNexus v1.0?
| Feature | TensorFlow | PyTorch | GRNexus |
|---|---|---|---|
| Cross-Language | โ | โ | โ Ruby โ Python |
| Zero Dependencies | โ | โ | โ Pure + C |
| Model Inspection | โ | โ | โ Without loading |
| Learning Curve | Steep | Moderate | Gentle |
| File Size | ~500MB | ~800MB | <5MB |
| Setup Time | 10-30 min | 10-30 min | 30 seconds |
| Production Ready | โ | โ | โ |
| Performance | Excellent | Excellent | Very Good |
| Text Processing | External | External | โ Built-in |
| Numeric Ops | External | External | โ Built-in |
Perfect for:
- ๐ Learning neural networks from scratch
- ๐ Rapid prototyping
- ๐ฌ Research and experimentation
- ๐ฑ Embedded systems (low memory)
- ๐ Cross-language teams
- ๐ฏ Production deployments (small-medium scale)
Not ideal for:
- ๐ผ๏ธ Large-scale image processing (use TensorFlow/PyTorch)
- ๐ฎ Real-time video processing
- ๐ Distributed training across clusters
- ๐ฅ Cutting-edge research (transformers, diffusion models)
๐ค Contributing
We welcome contributions! GRNexus is GPL-3.0 licensed and open source.
How to Contribute
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Areas We Need Help
- ๐ Documentation improvements
- ๐ Translations (more languages)
- ๐งช More test cases
- ๐ Bug reports and fixes
- โก Performance optimizations
- ๐จ Example projects
- ๐ Benchmarks
๐ License
GNU General Public License v3.0
This means you can:
- โ Use commercially
- โ Modify
- โ Distribute
- โ Use privately
But you must:
- โ ๏ธ Disclose source
- โ ๏ธ License under GPL-3.0
- โ ๏ธ State changes
See LICENSE for full details.
๐ Acknowledgments
GRNexus stands on the shoulders of giants:
- Inspiration: TensorFlow, PyTorch, Keras
- Activations: Research papers from Google, OpenAI, DeepMind
- Architecture: Modern deep learning best practices
- Community: Ruby and Python communities
๐ Support & Contact
- ๐ Issues: GitHub Issues
- ๐ฌ Discussions: GitHub Discussions
- ๐ง Email: support@grcodedigitalsolutions.com
- ๐ Website: grcodedigitalsolutions.com
โญ Star History
If you find GRNexus useful, please consider giving it a star! โญ
It helps others discover the project and motivates us to keep improving it.
๐ Ready to Build Something Amazing?
git clone https://github.com/grcodedigitalsolutions/GRNexus.git
Made with โก and โค๏ธ by GR Code Digital Solutions
Copyright ยฉ 2024-2025 GR Code Digital Solutions. Licensed under GPL-3.0.
Neural Networks โข Cross-Language โข Production Ready
