
Tokenizer

RubyGems | Homepage | Source Code | Bug Tracker


DESCRIPTION

A simple multilingual tokenizer – a linguistic tool intended to split written text into tokens for NLP tasks. It provides a CLI and a library for linguistic tokenization, an unavoidable preprocessing step in many HLT (Human Language Technology) tasks before further syntactic, semantic, and other higher-level processing.

Tokenization comprises two subtasks, Sentence Segmentation and Word Segmentation, together with boundary disambiguation for both.

Use it for tokenization of German, English and Dutch texts.
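The two subtasks can be illustrated with a minimal, self-contained sketch. This is not the gem's implementation, just a naive approximation of what sentence and word segmentation mean:

```ruby
# Sentence Segmentation (naive): split on terminal punctuation.
def naive_sentences(text)
  text.scan(/[^.!?]+[.!?]?/).map(&:strip).reject(&:empty?)
end

# Word Segmentation (naive): split on whitespace, then detach
# trailing punctuation into its own token.
def naive_tokens(sentence)
  sentence.split(/\s+/).flat_map do |word|
    m = word.match(/\A(.+?)([.,;:!?]+)\z/)
    m ? [m[1], m[2]] : [word]
  end
end

naive_sentences('Hallo! Ich gehe in die Schule.')
# => ["Hallo!", "Ich gehe in die Schule."]
naive_tokens('Ich gehe in die Schule!')
# => ["Ich", "gehe", "in", "die", "Schule", "!"]
```

A real tokenizer additionally has to disambiguate boundaries, e.g. periods in abbreviations or numbers, which this sketch deliberately ignores.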

Implemented Algorithms

to be …

INSTALLATION

Tokenizer is provided as a .gem package. Simply install it via RubyGems.

To install tokenizer issue the following command:

$ gem install tokenizer

If you want a system-wide installation, run the command as root (possibly using sudo).

Alternatively use your Gemfile for dependency management.
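With Bundler, a minimal Gemfile entry might look like this (no version constraint is mandated by the gem; pin one if your project requires it):

```ruby
# Gemfile
source 'https://rubygems.org'

gem 'tokenizer'
```

Then run `bundle install` to resolve and install the dependency.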

SYNOPSIS

You can use Tokenizer in two ways.

  • As a command line tool:

    $ echo 'Hi, ich gehe in die Schule!' | tokenize
  • As a library for embedded tokenization:

    > require 'tokenizer'
    > de_tokenizer = Tokenizer::WhitespaceTokenizer.new
    > de_tokenizer.tokenize('Ich gehe in die Schule!')
    => ["Ich", "gehe", "in", "die", "Schule", "!"]
  • Customizable PRE and POST list

    > require 'tokenizer'
    > de_tokenizer = Tokenizer::WhitespaceTokenizer.new(:de, { post: Tokenizer::Tokenizer::POST + ['|'] })
    > de_tokenizer.tokenize('Ich gehe|in die Schule!')
    => ["Ich", "gehe", "|in", "die", "Schule", "!"]

See documentation in the Tokenizer::WhitespaceTokenizer class for details on particular methods.

SUPPORT

If you have questions, bug reports or any suggestions, please drop me an email :) Any help is deeply appreciated!

CHANGELOG

For details on future plans and work in progress see CHANGELOG.rdoc.

CAUTION

This library is a work in progress! Though the interface is mostly complete, you may encounter features that are not yet implemented.

Please contact me with your suggestions, bug reports and feature requests.

LICENSE

Tokenizer is copyrighted software by Andrei Beliankou, 2011-

You may use, redistribute and change it under the terms provided in the LICENSE.rdoc file.