TimeDifference is the missing Ruby method for calculating the difference between two given times. You can get a Ruby time difference in years, months, weeks, days, hours, minutes, and seconds.
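A minimal usage sketch, assuming the gem's `TimeDifference.between` interface described above (values shown are illustrative):

    require 'time_difference'

    start_time = Time.new(2013, 1, 1)
    end_time   = Time.new(2014, 1, 1)

    # Each unit has its own reader on the result object.
    TimeDifference.between(start_time, end_time).in_years  # => 1.0
    TimeDifference.between(start_time, end_time).in_days   # => 365.0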
Calculates the difference in time relative to now and returns readable metrics (e.g. years_ago, days_ago, etc.).
A Rails gem that provides helper methods to display time differences in a human-readable format (e.g., '2 hours ago', '3 days later'). Features include auto-updating timestamps every minute without page reload, no JavaScript imports required, and seamless integration with Rails views. Supports various time units from seconds to years.
PORO to hold a monotonic tick count. Useful for measuring time differences.
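A conceptual sketch of such a PORO (illustrative only, not necessarily this gem's actual API), built on Ruby's monotonic clock:

    class Tick
      attr_reader :count

      def initialize
        # The monotonic clock never jumps backwards, so it is safe for durations.
        @count = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      end

      # Seconds elapsed from this tick to another tick (defaults to "now").
      def elapsed(other = Tick.new)
        other.count - count
      end
    end

    start = Tick.new
    sleep 0.1
    puts start.elapsed  # => roughly 0.1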
A script to check a given Consul key's EPOCH value for freshness. Good for
    monitoring cron or batch jobs: have the last step of the job post the EPOCH
    time to the target Consul key, and this script will monitor it against a
    given freshness threshold (the difference between the current time and the
    posted EPOCH).
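A hedged sketch of that check (the key name and threshold are hypothetical, and this is not the gem's actual CLI): read the posted epoch from Consul's KV HTTP API and compare it with the current time.

    require 'net/http'
    require 'json'
    require 'base64'

    key       = 'jobs/nightly-batch/last_run'  # hypothetical Consul key
    max_age_s = 3600                           # freshness threshold in seconds

    # Consul's KV endpoint returns a JSON array; stored values are base64-encoded.
    resp  = Net::HTTP.get(URI("http://localhost:8500/v1/kv/#{key}"))
    epoch = Base64.decode64(JSON.parse(resp).first['Value']).to_i

    age = Time.now.to_i - epoch
    puts age <= max_age_s ? "OK: posted #{age}s ago" : "STALE: posted #{age}s ago"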
Profile web applications by noting differences in response times based on input values
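A conceptual illustration of that approach (not this gem's API; the URL is a placeholder): send the same request with different input values and compare how long each response takes.

    require 'net/http'

    def time_request(url)
      t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      Net::HTTP.get_response(URI(url))
      Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
    end

    %w[a ab abc].each do |input|
      secs = time_request("http://example.test/lookup?q=#{input}")
      puts "#{input}: #{secs.round(3)}s"
    end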
Handles time differences.
Making time difference calculations fun
Calculates time differences within a 24-hour limit.
This gem calculates the difference between two date-time values. The output depends on the input given.
It returns a hash with the difference expressed in years, months, weeks, days, hours, minutes, and seconds.
Solves a quirk of rspec --profile in some code bases: results vary with every random spec ordering. This seems to be due to differences in dependency load order, class initialization, and test server startup. This lib runs rspec --profile many times, averaging the results to always give the same (stable) and meaningful result.
README
======
This is a simple API to evaluate information retrieval results. It allows you to load ranked and unranked query results and calculate various evaluation metrics (precision, recall, MAP, kappa) against a previously loaded gold standard.
Start this program from the command line with:
    retreval -l <gold-standard-file> -q <query-results> -f <format> -o <output-prefix>
The options are outlined when you pass no arguments and just call
    retreval
You will find further information in the RDOC documentation and the HOWTO section below.
If you want to see an example, use this command:
    retreval -l example/gold_standard.yml -q example/query_results.yml -f yaml -v
INSTALLATION
============
If you have RubyGems, just run
    gem install retreval
You can manually download the sources and build the Gem from there by `cd`ing to the folder where this README is saved and calling
    gem build retreval.gemspec
This will create a gem file, which you just have to install with `gem install <file>`, and you're done.
HOWTO
=====
This API supports the following evaluation tasks:
- Loading a Gold Standard that takes a set of documents, queries and corresponding judgements of relevancy (i.e. "Is this document relevant for this query?")
- Calculation of the _kappa measure_ for the given gold standard
- Loading ranked or unranked query results for a certain query
- Calculation of _precision_ and _recall_ for each result
- Calculation of the _F-measure_ for weighing precision and recall
- Calculation of _mean average precision_ for multiple query results
- Calculation of the _11-point precision_ and _average precision_ for ranked query results
- Printing of summary tables and results
Typically, you will want to use this Gem either standalone or within another application's context.
Standalone Usage
================
Call parameters
---------------
After installing the Gem (see INSTALLATION), you can always call `retreval` from the commandline. The typical call is:
    retreval -l <gold-standard-file> -q <query-results> -f <format> -o <output-prefix>
Where you have to define the following options:
- `gold-standard-file` is a file in a specified format that includes all the judgements
- `query-results` is a file in a specified format that includes all the query results in a single file
- `format` is the format that the files will use (either "yaml" or "plain")
- `output-prefix` is the prefix of output files that will be created
Formats
-------
Right now, we focus on the formats you can use to load data into the API. Currently, we support YAML files that must adhere to a special syntax. So, in order to load a gold standard, we need a file in the following format:
 * "query"       denotes the query
 * "documents"   these are the documents judged for this query
 * "id"          the ID of the document (e.g. its filename, etc.)
 * "judgements"  an array of judgements, each one with:
 * "relevant"    a boolean value of the judgment (relevant or not)
 * "user"        an optional identifier of the user
Example file, with one query, two documents, and one judgement:
        - query: 12th air force germany 1957
          documents:
          - id: g5701s.ict21311
            judgements: []
          - id: g5701s.ict21313
            judgements: 
            - relevant: false
              user: 2
So, when calling the program, specify the format as `yaml`.
For the query results, a similar format is used. Note that it is necessary to specify whether the result sets are ranked or not, as this will heavily influence the calculations. You can specify a score for each document, meaning the score that your retrieval algorithm has given the document, but this is not required. The documents will always be ranked in the order of their appearance, regardless of their score. Thus, in the following example, the document ending in "07" is ranked first and the one ending in "25" last, regardless of the score.
        ---
        query: 12th air force germany 1957
        ranked: true
        documents:
        -   score: 0.44034874
            document: g5701s.ict21307
        -   score: 0.44034874
            document: g5701s.ict21309
        -   score: 0.44034874
            document: g5701s.ict21311
        -   score: 0.44034874
            document: g5701s.ict21313
        -   score: 0.44034874
            document: g5701s.ict21315
        -   score: 0.44034874
            document: g5701s.ict21317
        -   score: 0.44034874
            document: g5701s.ict21319
        -   score: 0.44034874
            document: g5701s.ict21321
        -   score: 0.44034874
            document: g5701s.ict21323
        -   score: 0.44034874
            document: g5701s.ict21325
        ---
        query: 1612
        ranked: true
        documents:
        -   score: 1.0174774
            document: g3290.np000144
        -   score: 0.763108
            document: g3201b.ct000726
        -   score: 0.763108
            document: g3400.ct000886
        -   score: 0.6359234
            document: g3201s.ct000130
        ---
**Note**: You can also use the `plain` format, which will load the gold standard in a different way (but not the results):
        my_query        my_document_1     false
        my_query        my_document_2     true
See that every query/document/relevancy pair is separated by a tabulator? You can also add the user's ID in the fourth column if necessary.
Running the evaluation
-----------------------
After you have specified the input files and the format, you can run the program. If needed, the `-v` switch will turn on verbose messages, such as information on how many judgements, documents and users there are, but this shouldn't be necessary.
The program will first load the gold standard and then calculate the statistics for each result set. The output files are automatically created and contain a YAML representation of the results.
Calculations may take a while depending on the number of judgements and documents; with a thousand judgements, expect a few seconds per result set.
Interpreting the output files
------------------------------
Two output files will be created:
- `output_avg_precision.yml`
- `output_statistics.yml`
The first lists the average precision for each query in the query result file. The second file lists all supported statistics for each query in the query results file.
For example, for a ranked evaluation, the first two entries of such a query result statistic look like this:
        --- 
        12th air force germany 1957: 
        - :precision: 0.0
          :recall: 0.0
          :false_negatives: 1
          :false_positives: 1
          :true_negatives: 2516
          :true_positives: 0
          :document: g5701s.ict21313
          :relevant: false
        - :precision: 0.0
          :recall: 0.0
          :false_negatives: 1
          :false_positives: 2
          :true_negatives: 2515
          :true_positives: 0
          :document: g5701s.ict21317
          :relevant: false
You can see the precision and recall for that specific point and also the number of documents for the contingency table (true/false positives/negatives). Also, the document identifier is given.
API Usage
=========
Using this API in another ruby application is probably the more common use case. All you have to do is include the Gem in your Ruby or Ruby on Rails application. For details about available methods, please refer to the API documentation generated by RDoc.
**Important**: For this implementation, we use the document ID, the query and the user ID as the primary keys for matching objects. This means that your documents and queries are identified by a string and thus the strings should be sanitized first.
Loading the Gold Standard
-------------------------
Once you have loaded the Gem, you will probably start by creating a new gold standard.
    gold_standard = GoldStandard.new
Then, you can load judgements into this standard, either from a file, or manually:
    gold_standard.load_from_yaml_file "my-file.yml"
    gold_standard.add_judgement :document => doc_id, :query => query_string, :relevant => boolean, :user => "John"
There is a nice shortcut for the `add_judgement` method. Both lines are essentially the same:
    gold_standard.add_judgement :document => doc_id, :query => query_string, :relevant => boolean, :user => "John"
    gold_standard << { :document => doc_id, :query => query_string, :relevant => boolean, :user => "John" }
Note the usage of typical Rails hashes for better readability (also, this Gem was developed to be used in a Rails webapp).
Now that you have loaded the gold standard, you can do things like:
        gold_standard.contains_judgement? :document => "a document", :query => "the query"
        gold_standard.relevant? :document => "a document", :query => "the query"
Loading the Query Results
-------------------------
Now we want to create a new `QueryResultSet`. A query result set can contain more than one result, which is what we normally want. It is important that you specify the gold standard it belongs to.
    query_result_set = QueryResultSet.new :gold_standard => gold_standard
Just like the Gold Standard, you can read a query result set from a file:
    query_result_set.load_from_yaml_file "my-results-file.yml"
Alternatively, you can load the query results one by one. To do this, you have to create the results (either ranked or unranked) and then add documents:
    my_result = RankedQueryResult.new :query => "the query"
    my_result.add_document :document => "test_document 1", :score => 13
    my_result.add_document :document => "test_document 2", :score => 11
    my_result.add_document :document => "test_document 3", :score => 3
This result would be ranked, obviously, and contain three documents. Documents can have a score, but this is optional. You can also create an Array of documents first and add them altogether:
        documents = Array.new
        documents << ResultDocument.new :id => "test_document 1", :score => 20
        documents << ResultDocument.new :id => "test_document 2", :score => 21
        my_result = RankedQueryResult.new :query => "the query", :documents => documents
The same applies to `UnrankedQueryResult`s, obviously. The order of ranked documents is the same as the order in which they were added to the result.
The `QueryResultSet` will now contain all the results. They are stored in an array called `query_results`, which you can access. So, to iterate over each result, you might want to use the following code:
        query_result_set.query_results.each_with_index do |result, index|
        # ...
        end
Or, more simply:
        for result in query_result_set.query_results
        # ...
        end
Calculating statistics
----------------------
Now to the interesting part: Calculating statistics. As mentioned before, there is a conceptual difference between ranked and unranked results. Unranked results are much easier to calculate and thus take less CPU time.
No matter if unranked or ranked, you can get the most important statistics by just calling the `statistics` method.
        statistics = my_result.statistics
In the simple case of an unranked result, you will receive a hash with the following information:
* `precision` - the precision of the results
* `recall` - the recall of the results
* `false_negatives` - number of not retrieved but relevant items
* `false_positives` - number of retrieved but nonrelevant items
* `true_negatives` - number of not retrieved and nonrelevant items
* `true_positives` - number of retrieved and relevant items
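For reference, precision and recall relate to those contingency counts as follows (an illustrative check against the hash described above, not additional gem API):

        tp = statistics[:true_positives].to_f
        fp = statistics[:false_positives]
        fn = statistics[:false_negatives]

        precision = tp / (tp + fp)  # relevant share of what was retrieved
        recall    = tp / (tp + fn)  # retrieved share of what is relevant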
In case of a ranked result, you will receive an Array that consists of _n_ such Hashes, depending on the number of documents. Each Hash will give you the information at a certain rank, e.g. the following two lines return the recall at the fourth rank.
        statistics = my_ranked_result.statistics
        statistics[3][:recall]
In addition to the information mentioned above, you can also get for each rank:
* `document` - the ID of the document that was returned at this rank
* `relevant` - whether the document was relevant or not
Calculating statistics with missing judgements
----------------------------------------------
Sometimes, you don't have judgements for all document/query pairs in the gold standard. If this happens, the results will be cleaned up first. This means that every document in the results that doesn't appear to have a judgement will be removed temporarily.
As an example, take the following results:
* A
* B
* C
* D
Our gold standard only contains judgements for A and C. The results will be cleaned up first, thus leading to:
* A
* C
With this approach, we can still provide meaningful results (for precision and recall).
Other statistics
----------------
There are several other statistics that can be calculated, for example the **F measure**. The F measure weighs precision and recall and has one parameter, either "alpha" or "beta". Get the F measure like so:
        my_result.f_measure :beta => 1
If you don't specify either alpha or beta, we will assume that beta = 1.
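For reference, the F measure is the standard weighted harmonic mean of precision and recall; with beta = 1 it reduces to 2PR / (P + R). A plain Ruby rendering of that formula (not gem code):

        def f_measure(precision, recall, beta = 1.0)
          (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
        end

        f_measure(0.5, 0.25)  # => 0.333...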
Another interesting measure is **Cohen's Kappa**, which tells us about the inter-agreement of assessors. Get the kappa statistic like this:
        gold_standard.kappa
This will calculate the average kappa for each pairwise combination of users in the gold standard.
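For orientation, Cohen's kappa for a pair of assessors is the observed agreement corrected for the agreement expected by chance (the standard definition, shown here for reference, not gem code):

        # kappa = (P_observed - P_chance) / (1 - P_chance)
        def cohens_kappa(observed_agreement, chance_agreement)
          (observed_agreement - chance_agreement) / (1.0 - chance_agreement)
        end

        cohens_kappa(0.8, 0.5)  # => 0.6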
For ranked results one might also want to calculate an **11-point precision**. Just call the following:
        my_ranked_result.eleven_point_precision
This will return a Hash that has indices at the 11 recall levels from 0 to 1 (with steps of 0.1) and the corresponding precision at that recall level.
A class that wraps the Time class and makes it easy to work with most
known time values, including various time strings, automatically
converting them to Time values, and to perform tolerant comparisons.
Several time classes, and the String class, are extended with the
".easy_time" method to perform an auto-conversion.  A tolerant comparison
allows for times from differing systems to be compared, even when the
systems are out of sync, using the relationship operators and methods
like "newer?", "older?", "same?" and "between?".  A tolerant comparison
for equality is where the difference of two values is less than the
tolerance value (1 minute by default).  The tolerance can be configured,
even set to zero.  Finally, all of the Time class and instance methods
are available on the EasyTime class and instances.
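A minimal sketch of the described tolerant comparison, based only on the description above (the exact constructor and return values are assumptions, not confirmed API details):

    require 'easy_time'

    a = EasyTime.now                # Time class methods are mirrored on EasyTime
    b = (Time.now + 30).easy_time   # the ".easy_time" auto-conversion described above

    # 30 seconds apart falls within the default 1-minute tolerance,
    # so a tolerant equality comparison treats the two as the same time.
    a.same?(b)  # => true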
VectorNumber provides a Numeric-like experience for doing arithmetics on heterogeneous objects, with more advanced operations based on real vector spaces available when needed.
Features:
- Add and subtract (almost) any object, with no setup or declaration.
- Multiply and divide vectors by any real number to create 1.35 of an array and -2 of a string. What does that mean? Only you know!
- Use vectors instead of inbuilt numbers in most situations with no difference in behavior. Or, use familiar methods from numerics with sane semantics!
- Enumerate vectors in a hash-like fashion, or transform to an array or hash as needed.
- Enjoy a mix of vector-, complex- and polynomial-like behavior at appropriate times.
- No dependencies, no extensions. It just works!
== ICU4R - ICU Unicode bindings for Ruby
ICU4R is an attempt to provide better Unicode support for Ruby,
where it has been lacking for a long time.
The current code is mostly a rewritten string.c from Ruby 1.8.3.
ICU4R is a Ruby C-extension binding for the ICU library[1]
and provides the following classes and functionality:
* UString:
    - String-like class with internal UTF16 storage;
    - UCA rules for UString comparisons (<=>, casecmp);
    - encoding(codepage) conversion;
    - Unicode normalization;
    - transliteration, also rule-based;
    Bunch of locale-sensitive functions:
    - upcase/downcase;
    - string collation;
    - string search;
    - iterators over text line/word/char/sentence breaks;
    - message formatting (number/currency/string/time);
    - date and number parsing.
* URegexp - unicode regular expressions.
* UResourceBundle - access to resource bundles, including ICU locale data.
* UCalendar - date manipulation and timezone info.
* UConverter - codepage conversions API
* UCollator - locale-sensitive string comparison
== Install and usage
   > ruby extconf.rb
   > make && make check
   > make install
Now, in your scripts just require 'icu4r'.
To create RDoc, run 
   > sh tools/doc.sh
== Requirements
To build and use ICU4R you will need GCC and ICU v3.4 libraries[2].
== Differences from Ruby String and Regexp classes
=== UString vs String
1. UString substring/index  methods use UTF16 codeunit indexes, not code points.
2. UString supports most methods from String class. Missing methods are:
        capitalize, capitalize!, swapcase, swapcase!
        %, center, ljust, rjust
        chomp, chomp!, chop, chop!
        count, delete, delete!, squeeze, squeeze!, tr, tr!, tr_s, tr_s!
        crypt, intern, sum, unpack
        dump, each_byte, each_line
        hex, oct, to_i, to_sym
        reverse, reverse!
        succ, succ!, next, next!, upto
        
3. Instead of String#% method, UString#format is provided. See FORMATTING for short reference.
4. UStrings can be created via String.to_u(encoding='utf8') or global u(str,[encoding='utf8'])
   calls. Note that the +encoding+ parameter must be a String value.
5. There's a difference between a character grapheme, a codepoint, and a codeunit. See the Unicode
   reports for the gory details, but in short: the locale-dependent notion of a character can be
   represented using more than one codepoint - a base letter plus combining marks (accents, possibly
   more than one!) - and each codepoint can require more than one codeunit to store (for UTF-8 the
   codeunit size is 8 bits, though some codepoints require up to 4 bytes). So, UString has
   normalization and locale-dependent break iterators.
6. Currently UString doesn't include Enumerable module.
7. UString index/[] methods that accept a URegexp throw an exception if a Regexp is passed.
8. UString#<=>, UString#casecmp use UCA rules.
=== URegexp
UString uses ICU regexp library. Pattern syntax is described in [./docs/UNICODE_REGEXPS] and ICU docs.
There are some differences between processing in Ruby Regexp and URegexp:
1. When UString#sub or UString#gsub is called with a block, the special vars ($~, $&, $1, ...) aren't
   set, as their values are processed deep inside Ruby core code. Instead, the block receives a UMatch
   object, which is essentially an immutable array of matching groups:
        "test".u.gsub(ure("(e)(.)")) do |match| 
           puts match[0]  # => 'es' <--> $&
           puts match[1]  # => 'e'  <--> $1
           puts match[2]  # => 's'  <--> $2
        end
2. In a URegexp search pattern, backreferences are written as \n (\1, \2, ...);
   in a replacement string they are written as $1, $2, ...
   NOTE: URegexp considers a char to be a digit NOT ONLY for ASCII (0x0030-0x0039), but for
   any Unicode char which has the property Decimal digit number (Nd), e.g.:
        a = [?$, 0x1D7D9].pack("U*").u * 2
        puts a.inspect_names
        <U000024>DOLLAR SIGN
        <U01D7D9>MATHEMATICAL DOUBLE-STRUCK DIGIT ONE
        <U000024>DOLLAR SIGN
        <U01D7D9>MATHEMATICAL DOUBLE-STRUCK DIGIT ONE
        puts "abracadabra".u.gsub(/(b)/.U, a)
        abbracadabbra
3. One can create URegexp using global Kernel#ure function, Regexp#U, Regexp#to_u, or
   from UString using URegexp.new, e.g:
      /pattern/.U =~ "string".u
4. There are differences about Regexp and URegexp multiline matching options:
      t = "text\ntest"
      # ^,$ handling : URegexp multiline <-> Ruby default
      t.u =~ ure('^\w+$', URegexp::MULTILINE)
      => #<UMatch:0xf6f7de04 @ranges=[0..3], @cg=[\u0074\u0065\u0078\u0074]>
      t =~ /^\w+$/
      => 0
      # . matches \n : URegexp DOTALL <-> /m
      t.u =~ ure('.+test', URegexp::DOTALL)
      => #<UMatch:0xf6fa4d88 ...
      t.u =~ /.+test/m
5. UMatch.range(idx) returns range for capturing group idx. This range is in codeunits.
=== References
1. ICU Official Homepage http://ibm.com/software/globalization/icu/ 
2. ICU downloads http://ibm.com/software/globalization/icu/downloads.jsp
3. ICU Home Page http://icu.sf.net 
4. Unicode Home Page http://www.unicode.org
==== BUGS, DOCS, TO DO
The code is still slow, inefficient, and highly experimental, so it may
have security holes, memory leaks, bugs, inconsistent documentation,
and an incomplete test suite. Use it at your own risk.
Bug reports and feature requests are welcome :)
===  Copying
This extension module is copyrighted free software by Nikolai Lugovoi.
You can redistribute it and/or modify it under the terms of MIT License.
Nikolai Lugovoi <meadow.nnick@gmail.com>
Deliver all master files managed in a single master snapshot directory
into the specified directory while maintaining the hierarchy of the
master snapshot directory. If the destination file already exists,
back it up first and then deliver the master file.
The difference from rsync is that master_delivery creates symlinks
instead of copying the master files. Because they are symlinks, keep in
mind that the master files must stay in the same location, but this also
has the advantage that the master file is updated at the same time you
make changes directly to the delivered file.
Have you ever found that a master file gradually goes stale?
master_delivery can prevent this.
If the master directory is git or svn managed, you can manage revisions
of files that are delivered here and there at once with commands
like git diff and git commit.
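A conceptual sketch of that delivery step (illustrative only, not the gem's code): back up any existing regular file at the destination, then symlink the destination to the master copy.

    require 'fileutils'

    def deliver(master_path, dest_path, backup_dir)
      FileUtils.mkdir_p(File.dirname(dest_path))
      if File.exist?(dest_path) && !File.symlink?(dest_path)
        FileUtils.mkdir_p(backup_dir)
        FileUtils.mv(dest_path, File.join(backup_dir, File.basename(dest_path)))
      end
      # Symlink so that edits to the delivered file go straight to the master.
      FileUtils.ln_s(File.expand_path(master_path), dest_path, force: true)
    end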
The affixapi.com API documentation.  # Introduction Affix API is an OAuth 2.1 application that allows developers to access customer data, without developers needing to manage or maintain integrations; or collect login credentials or API keys from users for these third party systems.  # OAuth 2.1 Affix API follows the [OAuth 2.1 spec](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-08).  As an OAuth application, Affix API handles not only both the collection of sensitive user credentials or API keys, but also builds and maintains the integrations with the providers, so you don't have to.  # How to obtain an access token in order to get started, you must:   - register a `client_id`   - direct your user to the sign in flow  (`https://connect.affixapi.com`     [with the appropriate query     parameters](https://github.com/affixapi/starter-kit/tree/master/connect))   - capture `authorization_code` we will send to your redirect URI after     the sign in flow is complete and exchange that `authorization_code` for     a Bearer token  # Sandbox keys (developer mode) ### dev ``` eyJhbGciOiJFUzI1NiIsImtpZCI6Ims5RmxwSFR1YklmZWNsUU5QRVZzeFcxazFZZ0Zfbk1BWllOSGVuOFQxdGciLCJ0eXAiOiJKV1MifQ.eyJwcm92aWRlciI6InNhbmRib3giLCJzY29wZXMiOlsiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL2NvbXBhbnkiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvZW1wbG95ZWUiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvZW1wbG95ZWVzIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL2lkZW50aXR5IiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3BheXJ1bnMiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvcGF5cnVucy86cGF5cnVuX2lkIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3RpbWUtb2ZmLWJhbGFuY2VzIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3RpbWUtb2ZmLWVudHJpZXMiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvdGltZXNoZWV0cyJdLCJ0b2tlbiI6ImQ1OTZhMmYzLWYzNzktNGE1ZC1hMmRhLTk4OWJmYWViYTg1ZCIsImlhdCI6MTcwMjkyMDkwMywiaXNzIjoicHVibGljYXBpLWludGVybWVkaWF0ZS5kZXYuZW5naW5lZXJpbmcuYWZmaXhhcGkuY29tIiwic3ViIjoiZGV2ZWxvcGVyIiwiYXVkIjoiM0ZEQUVERjktMURDQTRGNTQtODc5NDlGNkEtNDEwMjc2NDMifQ.VLWYjCQvBS0C3ZA6_J3-U-idZj5EYI2IlDdTjAWBxSIHGufp6cqaVodKsF2BeIqcIeB3P0lW-KL9mY3xGd7ckQ ```  #### `employees` endpoint sample: ``` curl --fail \   -X GET \   -H 'Authorization: Bearer eyJhbGciOiJFUzI1NiIsImtpZCI6Ims5RmxwSFR1YklmZWNsUU5QRVZzeFcxazFZZ0Zfbk1BWllOSGVuOFQxdGciLCJ0eXAiOiJKV1MifQ.eyJwcm92aWRlciI6InNhbmRib3giLCJzY29wZXMiOlsiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL2NvbXBhbnkiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvZW1wbG95ZWUiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvZW1wbG95ZWVzIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL2lkZW50aXR5IiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3BheXJ1bnMiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvcGF5cnVucy86cGF5cnVuX2lkIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3RpbWUtb2ZmLWJhbGFuY2VzIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3RpbWUtb2ZmLWVudHJpZXMiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvdGltZXNoZWV0cyJdLCJ0b2tlbiI6ImQ1OTZhMmYzLWYzNzktNGE1ZC1hMmRhLTk4OWJmYWViYTg1ZCIsImlhdCI6MTcwMjkyMDkwMywiaXNzIjoicHVibGljYXBpLWludGVybWVkaWF0ZS5kZXYuZW5naW5lZXJpbmcuYWZmaXhhcGkuY29tIiwic3ViIjoiZGV2ZWxvcGVyIiwiYXVkIjoiM0ZEQUVERjktMURDQTRGNTQtODc5NDlGNkEtNDEwMjc2NDMifQ.VLWYjCQvBS0C3ZA6_J3-U-idZj5EYI2IlDdTjAWBxSIHGufp6cqaVodKsF2BeIqcIeB3P0lW-KL9mY3xGd7ckQ' \   'https://dev.api.affixapi.com/2023-03-01/developer/employees' ```  ### prod ``` 
eyJhbGciOiJFUzI1NiIsImtpZCI6Ims5RmxwSFR1YklmZWNsUU5QRVZzeFcxazFZZ0Zfbk1BWllOSGVuOFQxdGciLCJ0eXAiOiJKV1MifQ.eyJwcm92aWRlciI6InNhbmRib3giLCJzY29wZXMiOlsiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL2NvbXBhbnkiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvZW1wbG95ZWUiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvZW1wbG95ZWVzIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL2lkZW50aXR5IiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3BheXJ1bnMiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvcGF5cnVucy86cGF5cnVuX2lkIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3RpbWUtb2ZmLWJhbGFuY2VzIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3RpbWUtb2ZmLWVudHJpZXMiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvdGltZXNoZWV0cyJdLCJ0b2tlbiI6IjI5YjFjYTg4LWNlNjktNDgyZC1iNGZjLTkzMWMzZmJkYWM4ZSIsImlhdCI6MTcwMjkyMTA4MywiaXNzIjoicHVibGljYXBpLWludGVybWVkaWF0ZS5wcm9kLmVuZ2luZWVyaW5nLmFmZml4YXBpLmNvbSIsInN1YiI6ImRldmVsb3BlciIsImF1ZCI6IjA4QkIwODFFLUQ5QUI0RDE0LThERjk5MjMzLTY2NjE1Q0U5In0.2zdpFAmiyYiYk6MOcbXNUwwR4M1Fextnaac340x54AidiWXCyw-u9KeavbqfYF6q8a9kcDLrxhJ8Wc_3tIzuVw ```  #### `employees` endpoint sample: ``` curl --fail \   -X GET \   -H 'Authorization: Bearer eyJhbGciOiJFUzI1NiIsImtpZCI6Ims5RmxwSFR1YklmZWNsUU5QRVZzeFcxazFZZ0Zfbk1BWllOSGVuOFQxdGciLCJ0eXAiOiJKV1MifQ.eyJwcm92aWRlciI6InNhbmRib3giLCJzY29wZXMiOlsiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL2NvbXBhbnkiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvZW1wbG95ZWUiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvZW1wbG95ZWVzIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL2lkZW50aXR5IiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3BheXJ1bnMiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvcGF5cnVucy86cGF5cnVuX2lkIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3RpbWUtb2ZmLWJhbGFuY2VzIiwiLzIwMjMtMDMtMDEvZGV2ZWxvcGVyL3RpbWUtb2ZmLWVudHJpZXMiLCIvMjAyMy0wMy0wMS9kZXZlbG9wZXIvdGltZXNoZWV0cyJdLCJ0b2tlbiI6IjI5YjFjYTg4LWNlNjktNDgyZC1iNGZjLTkzMWMzZmJkYWM4ZSIsImlhdCI6MTcwMjkyMTA4MywiaXNzIjoicHVibGljYXBpLWludGVybWVkaWF0ZS5wcm9kLmVuZ2luZWVyaW5nLmFmZml4YXBpLmNvbSIsInN1YiI6ImRldmVsb3BlciIsImF1ZCI6IjA4QkIwODFFLUQ5QUI0RDE0LThERjk5MjMzLTY2NjE1Q0U5In0.2zdpFAmiyYiYk6MOcbXNUwwR4M1Fextnaac340x54AidiWXCyw-u9KeavbqfYF6q8a9kcDLrxhJ8Wc_3tIzuVw' \   'https://api.affixapi.com/2023-03-01/developer/employees' ```  # Webhooks An exciting feature for HR/Payroll modes are webhooks.  If enabled, your `webhook_uri` is set on your `client_id` for the respective environment: `dev | prod`  Webhooks are configured to make live requests to the underlying integration 1x/hr, and if a difference is detected since the last request, we will send a request to your `webhook_uri` with this shape:  ``` {    added: <api.v20230301.Employees>[     <api.v20230301.Employee>{       ...,       date_of_birth: '2010-08-06',       display_full_name: 'Daija Rogahn',       employee_number: '57993',       employment_status: 'pending',       employment_type: 'other',       employments: [         {           currency: 'eur',           effective_date: '2022-02-25',           employment_type: 'other',           job_title: 'Dynamic Implementation Manager',           pay_frequency: 'semimonthly',           pay_period: 'YEAR',           pay_rate: 96000,         },       ],       first_name: 'Daija',       ...     
}   ],   removed: [],   updated: [     <api.v20230301.Employee>{       ...,       date_of_birth: '2009-11-09',       display_full_name: 'Lourdes Stiedemann',       employee_number: '63189',       employment_status: 'leave',       employment_type: 'full_time',       employments: [         {           currency: 'gbp',           effective_date: '2023-01-16',           employment_type: 'full_time',           job_title: 'Forward Brand Planner',           pay_frequency: 'semimonthly',           pay_period: 'YEAR',           pay_rate: 86000,         },       ],       first_name: 'Lourdes',     }   ] } ```  the following headers will be sent with webhook requests:  ``` x-affix-api-signature: ab8474e609db95d5df3adc39ea3add7a7544bd215c5c520a30a650ae93a2fba7  x-affix-api-origin:  webhooks-employees-webhook  user-agent:  affixapi.com ```  Before trusting the payload, you should sign the payload and verify the signature matches the signature sent by the `affixapi.com` service.  This secures that the data sent to your `webhook_uri` is from the `affixapi.com` server.  The signature is created by combining the signing secret (your `client_secret`) with the body of the request sent using a standard HMAC-SHA256 keyed hash.  The signature can be created via:   - create an `HMAC` with your `client_secret`   - update the `HMAC` with the payload   - get the hex digest -> this is the signature  Sample `typescript` code that follows this recipe:  ``` import { createHmac } from 'crypto';  export const computeSignature = ({   str,   signingSecret, }: {   signingSecret: string;   str: string; }): string => {   const hmac = createHmac('sha256', signingSecret);   hmac.update(str);   const signature = hmac.digest('hex');    return signature; }; ```  ## Rate limits Open endpoints (not gated by an API key) (applied at endpoint level):   - 15 requests every 1 minute (by IP address)   - 25 requests every 5 minutes (by IP address)  Gated endpoints (require an API key) (applied at endpoint level):   - 40 requests every 1 minute (by IP address)   - 40 requests every 5 minutes (by `client_id`)  Things to keep in mind:   - Open endpoints (not gated by an API key) will likely be called by your     users, not you, so rate limits generally would not apply to you.   - As a developer, rate limits are applied at the endpoint granularity.     - For example, say the rate limits below are 10 requests per minute by ip.       from that same ip, within 1 minute, you get:       - 10 requests per minute on `/orders`,       - another 10 requests per minute on `/items`,       - and another 10 requests per minute on `/identity`,       - for a total of 30 requests per minute. 
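For Ruby clients, the same signature recipe can be reproduced with OpenSSL from the standard library (the values below are placeholders):

```
require 'openssl'

client_secret = ENV['AFFIX_CLIENT_SECRET']                # your signing secret
payload       = '{"added":[],"removed":[],"updated":[]}'  # raw webhook request body

signature = OpenSSL::HMAC.hexdigest('SHA256', client_secret, payload)
# Compare `signature` with the `x-affix-api-signature` header before trusting the payload.
```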
                                Lookout
  Lookout is a unit testing framework for Ruby¹ that puts your results in
  focus. Tests (expectations) are written as follows
    expect 2 do
      1 + 1
    end
    expect ArgumentError do
      Integer('1 + 1')
    end
    expect Array do
      [1, 2, 3].select{ |i| i % 2 == 0 }
    end
    expect [2, 4, 6] do
      [1, 2, 3].map{ |i| i * 2 }
    end
  Lookout is designed to encourage – force, even – unit testing best practices
  such as
•   Setting up only one expectation per test
•   Not setting expectations on non-public APIs
•   Test isolation
  This is done by
•   Only allowing one expectation to be set per test
•   Providing no (additional) way of accessing private state
•   Providing no setup and tear-down methods, nor a method of providing test
    helpers
  Other important points are
•   Putting the expected outcome of a test in focus with the steps of the
    calculation of the actual result only as a secondary concern
•   A focus on code readability by providing no mechanism for describing an
    expectation other than the code in the expectation itself
•   A unified syntax for setting up both state-based and behavior-based
    expectations
  The way Lookout works has been heavily influenced by expectations², by
  {Jay Fields}³.  The code base was once also heavily based on expectations,
  based at Subversion {revision 76}⁴.  A lot has happened since then and all of
  the work past that revision is due to {Nikolai Weibull}⁵.
¹ Ruby: http://ruby-lang.org/
² Expectations: http://expectations.rubyforge.org/
³ Jay Fields’s blog: http://blog.jayfields.com/
⁴ Lookout revision 76:
  https://github.com/now/lookout/commit/537bedf3e5b3eb4b31c066b3266f42964ac35ebe
⁵ Nikolai Weibull’s home page: http://disu.se/
§ Installation
    Install Lookout with
      % gem install lookout
§ Usage
    Lookout allows you to set expectations on an object’s state or behavior.
    We’ll begin by looking at state expectations and then take a look at
    expectations on behavior.
  § Expectations on State: Literals
      An expectation can be made on the result of a computation:
        expect 2 do
          1 + 1
        end
      Most objects, in fact, have their state expectations checked by invoking
      ‹#==› on the expected value with the result as its argument.
      Checking that a result is within a given range is also simple:
        expect 0.099..0.101 do
          0.4 - 0.3
        end
      Here, the more general ‹#===› is being used on the ‹Range›.
  § Regexps
      ‹Strings› of course match against ‹Strings›:
        expect 'ab' do
          'abc'[0..1]
        end
      but we can also match a ‹String› against a ‹Regexp›:
        expect %r{a substring} do
          'a string with a substring'
        end
      (Note the use of ‹%r{…}› to avoid warnings that will be generated when
      Ruby parses ‹expect /…/›.)
  § Modules
      Checking that the result includes a certain module is done by expecting the
      ‹Module›.
        expect Enumerable do
          []
        end
      This, due to the nature of Ruby, of course also works for classes (as
      they are also modules):
        expect String do
          'a string'
        end
      This doesn’t hinder us from expecting the actual ‹Module› itself:
        expect Enumerable do
          Enumerable
        end
      or the ‹Class›:
        expect String do
          String
        end
      for obvious reasons.
      As you may have figured out yourself, this is accomplished by first
      trying ‹#==› and, if it returns ‹false›, then trying ‹#===› on the
      expected ‹Module›.  This is also true of ‹Ranges› and ‹Regexps›.
  § Booleans
      Truthfulness is expected with ‹true› and ‹false›:
        expect true do
          1
        end
        expect false do
          nil
        end
      Results equaling ‹true› or ‹false› are slightly different:
        expect TrueClass do
          true
        end
        expect FalseClass do
          false
        end
      The rationale for this is that you should only care if the result of a
      computation evaluates to a value that Ruby considers to be either true or
      false, not the exact literals ‹true› or ‹false›.
  § IO
      Expecting output on an IO object is also common:
        expect output("abc\ndef\n") do |io|
          io.puts 'abc', 'def'
        end
      This can be used to capture the output of a formatter that takes an
      output object as a parameter.
  § Warnings
      Expecting warnings from code isn’t very common, but should be done:
        expect warning('this is your final one!') do
          warn 'this is your final one!'
        end
        expect warning('this is your final one!') do
          warn '%s:%d: warning: this is your final one!' % [__FILE__, __LINE__]
        end
      ‹$VERBOSE› is set to ‹true› during the execution of the block, so you
      don’t need to do so yourself.  If you have other code that depends on the
      value of $VERBOSE, that can be done with ‹#with_verbose›
        expect nil do
          with_verbose nil do
            $VERBOSE
          end
        end
  § Errors
      You should always be expecting errors from – and in, but that’s a
      different story – your code:
        expect ArgumentError do
          Integer('1 + 1')
        end
      Often, not only the type of the error, but its description, is important
      to check:
        expect StandardError.new('message') do
          raise StandardError.new('message')
        end
      As with ‹Strings›, ‹Regexps› can be used to check the error description:
        expect StandardError.new(/mess/) do
          raise StandardError.new('message')
        end
  § Queries Through Symbols
      Symbols are generally matched against symbols, but as a special case,
      symbols ending with ‹?› are seen as expectations on the result of query
      methods on the result of the block, given that the method is of zero
      arity and that the result isn’t a Symbol itself.  Simply expect a symbol
      ending with ‹?›:
        expect :empty? do
          []
        end
      To expect its negation, expect the same symbol beginning with ‹not_›:
        expect :not_nil? do
          [1, 2, 3]
        end
      This is the same as
        expect true do
          [].empty?
        end
      and
        expect false do
          [1, 2, 3].empty?
        end
      but provides much clearer failure messages.  It also makes the
      expectation’s intent a lot clearer.
  § Queries By Proxy
      There’s also a way to make the expectations of query methods explicit by
      invoking methods on the result of the block.  For example, to check that
      the even elements of the Array ‹[1, 2, 3]› include ‹1› you could write
        expect result.to.include? 1 do
          [1, 2, 3].reject{ |e| e.even? }
        end
      You could likewise check that the result doesn’t include 2:
        expect result.not.to.include? 2 do
          [1, 2, 3].reject{ |e| e.even? }
        end
      This is the same as (and executes a little bit slower than) writing
        expect false do
          [1, 2, 3].reject{ |e| e.even? }.include? 2
        end
      but provides much clearer failure messages.  Given that these two last
      examples would fail, you’d get a message saying “[1, 2, 3]#include?(2)”
      instead of the terser “true≠false”.  It also clearly separates the actual
      expectation from the set-up.
      The keyword for this kind of expectations is ‹result›.  This may be
      followed by any of the methods
    •   ‹#not›
    •   ‹#to›
    •   ‹#be›
    •   ‹#have›
      or any other method you will want to call on the result.  The methods
      ‹#to›, ‹#be›, and ‹#have› do nothing except improve readability.  The
      ‹#not› method inverts the expectation.
  § Literal Literals
      If you need to literally check against any of the types of objects
      otherwise treated specially, that is, any instances of
    •   ‹Module›
    •   ‹Range›
    •   ‹Regexp›
    •   ‹Exception›
    •   ‹Symbol›, given that it ends with ‹?›
      you can do so by wrapping it in ‹literal(…)›:
        expect literal(:empty?) do
          :empty?
        end
      You almost never need to do this, as, for all but symbols, instances will
      match accordingly as well.
  § Expectations on Behavior
      We expect our objects to be on their best behavior.  Lookout allows you
      to make sure that they are.
      Reception expectations let us verify that a method is called in the way
      that we expect it to be:
        expect mock.to.receive.to_str(without_arguments){ '123' } do |o|
          o.to_str
        end
      Here, ‹#mock› creates a mock object, an object that doesn’t respond to
      anything unless you tell it to.  We tell it to expect to receive a call
      to ‹#to_str› without arguments and have ‹#to_str› return ‹'123'› when
      called.  The mock object is then passed in to the block so that the
      expectations placed upon it can be fulfilled.
      Sometimes we only want to make sure that a method is called in the way
      that we expect it to be, but we don’t care if any other methods are
      called on the object.  A stub object, created with ‹#stub›, expects any
      method and returns a stub object that, again, expects any method, and
      thus fits the bill.
        expect stub.to.receive.to_str(without_arguments){ '123' } do |o|
          o.to_str if o.convertable?
        end
      You don’t have to use a mock object to verify that a method is called:
        expect Object.to.receive.name do
          Object.name
        end
      As you have figured out by now, the expected method call is set up by
      calling ‹#receive› after ‹#to›.  ‹#Receive› is followed by a call to the
      method to expect with any expected arguments.  The body of the expected
      method can be given as the block to the method.  Finally, an expected
      invocation count may follow the method.  Let’s look at this formal
      specification in more detail.
      The expected method arguments may be given in a variety of ways.  Let’s
      introduce them by giving some examples:
        expect mock.to.receive.a do |m|
          m.a
        end
      Here, the method ‹#a› must be called with any number of arguments.  It
      may be called any number of times, but it must be called at least once.
      If a method must receive exactly one argument, you can use ‹Object›, as
      the same matching rules apply for arguments as they do for state
      expectations:
        expect mock.to.receive.a(Object) do |m|
          m.a 0
        end
      If a method must receive a specific argument, you can use that argument:
        expect mock.to.receive.a(1..2) do |m|
          m.a 1
        end
      Again, the same matching rules apply for arguments as they do for state
      expectations, so the previous example expects a call to ‹#a› with 1, 2,
      or the Range 1..2 as an argument on ‹m›.
      If a method must be invoked without any arguments you can use
      ‹without_arguments›:
        expect mock.to.receive.a(without_arguments) do |m|
          m.a
        end
      You can of course use both ‹Object› and actual arguments:
        expect mock.to.receive.a(Object, 2, Object) do |m|
          m.a nil, 2, '3'
        end
      The body of the expected method may be given as the block.  Here, calling
      ‹#a› on ‹m› will give the result ‹1›:
        expect mock.to.receive.a{ 1 } do |m|
          raise 'not 1' unless m.a == 1
        end
      If no body has been given, the result will be a stub object.
      To take a block, grab a block parameter and ‹#call› it:
        expect mock.to.receive.a{ |&b| b.call(1) } do |m|
          j = 0
          m.a{ |i| j = i }
          raise 'not 1' unless j == 1
        end
      To simulate an ‹#each›-like method, ‹#call› the block several times.
      Invocation count expectations can be set if the default expectation of
      “at least once” isn’t good enough.  The following expectations are
      possible
    •   ‹#at_most_once›
    •   ‹#once›
    •   ‹#at_least_once›
    •   ‹#twice›
      And, for a given ‹N›,
    •   ‹#at_most(N)›
    •   ‹#exactly(N)›
    •   ‹#at_least(N)›
  § Utilities: Stubs
      Method stubs are another useful thing to have in a unit testing
      framework.  Sometimes you need to override a method that does something a
      test shouldn’t do, like access and alter bank accounts.  We can override
      – stub out – a method by using the ‹#stub› method.  Let’s assume that we
      have an ‹Account› class that has two methods, ‹#slips› and ‹#total›.
      ‹#Slips› retrieves the bank slips that keep track of your deposits to the
      ‹Account› from a database.  ‹#Total› sums the ‹#slips›.  In the following
      test we want to make sure that ‹#total› does what it should do without
      accessing the database.  We therefore stub out ‹#slips› and make it
      return something that we can easily control.
        expect 6 do |m|
          stub(Class.new{
                 def slips
                   raise 'database not available'
                 end
                 def total
                   slips.reduce(0){ |m, n| m.to_i + n.to_i }
                 end
               }.new, :slips => [1, 2, 3]){ |account| account.total }
        end
      To make it easy to create objects with a set of stubbed methods there’s
      also a convenience method:
        expect 3 do
          s = stub(:a => 1, :b => 2)
          s.a + s.b
        end
      This short-hand notation can also be used for the expected value:
        expect stub(:a => 1, :b => 2).to.receive.a do |o|
          o.a + o.b
        end
      and also works for mock objects:
        expect mock(:a => 2, :b => 2).to.receive.a do |o|
          o.a + o.b
        end
      Blocks are also allowed when defining stub methods:
        expect 3 do
          s = stub(:a => proc{ |a, b| a + b })
          s.a(1, 2)
        end
      If need be, we can stub out a specific method on an object:
        expect 'def' do
          stub('abc', :to_str => 'def'){ |a| a.to_str }
        end
      The stub is active during the execution of the block.
  § Overriding Constants
      Sometimes you need to override the value of a constant during the
      execution of some code.  Use ‹#with_const› to do just that:
        expect 'hello' do
          with_const 'A::B::C', 'hello' do
            A::B::C
          end
        end
      Here, the constant ‹A::B::C› is set to ‹'hello'› during the execution of
      the block.  None of the constants ‹A›, ‹B›, and ‹C› need to exist for
      this to work.  If a constant doesn’t exist it’s created and set to a new,
      empty, ‹Module›. The value of ‹A::B::C›, if any, is restored after the
      block returns and any constants that didn’t previously exist are removed.
  § Overriding Environment Variables
      Another thing you often need to control in your tests is the value of
      environment variables.  Depending on such global values is, of course,
      not a good practice, but is often unavoidable when working with external
      libraries.  ‹#With_env› allows you to override the value of environment
      variables during the execution of a block by giving it a ‹Hash› of
      key/value pairs where the key is the name of the environment variable and
      the value is the value that it should have during the execution of that
      block:
        expect 'hello' do
          with_env 'INTRO' => 'hello' do
            ENV['INTRO']
          end
        end
      Any overridden values are restored and any keys that weren’t previously a
      part of the environment are removed when the block returns.
  § Overriding Globals
      You may also want to override the value of a global temporarily:
        expect 'hello' do
          with_global :$stdout, StringIO.new do
            print 'hello'
            $stdout.string
          end
        end
      You thus provide the name of the global and a value that it should take
      during the execution of a block of code.  The block gets passed the
      overridden value, should you need it:
        expect true do
          with_global :$stdout, StringIO.new do |overridden|
            $stdout != overridden
          end
        end
§ Integration
    Lookout can be used from Rake¹.  Simply install Lookout-Rake²:
      % gem install lookout-rake
    and add the following code to your Rakefile
      require 'lookout-rake-3.0'
      Lookout::Rake::Tasks::Test.new
    Make sure to read up on using Lookout-Rake for further benefits and
    customization.
¹ Read more about Rake at http://rake.rubyforge.org/
² Get information on Lookout-Rake at http://disu.se/software/lookout-rake/
§ API
    Lookout comes with an API¹ that lets you create things such as new
    expected values, difference reports for your types, and so on.
¹ See http://disu.se/software/lookout/api/
§ Interface Design
    The default output of Lookout can Spartanly be described as Spartan.  If no
    errors or failures occur, no output is generated.  This is unconventional,
    as unit testing frameworks tend to dump a lot of information on the user,
    concerning things such as progress, test count summaries, and flamboyantly
    colored text telling you that your tests passed.  None of this output is
    needed.  Your tests should run fast enough to not require progress reports.
    The lack of output provides you with the same amount of information as
    reporting success.  Test count summaries are only useful if you’re worried
    that your tests aren’t being run, but if you worry about that, then
    providing such output doesn’t really help.  Testing your tests requires
    something beyond reporting some arbitrary count that you would have to
    verify by hand anyway.
    When errors or failures do occur, however, the relevant information is
    output in a format that can easily be parsed by an ‹'errorformat'› for Vim
    or with {Compilation Mode}¹ for Emacs².  Diffs are generated for Strings,
    Arrays, Hashes, and I/O.
¹ Read up on Compilation mode for Emacs at http://www.emacswiki.org/emacs/CompilationMode
² Visit The GNU Foundation’s Emacs’ software page at http://www.gnu.org/software/emacs/
§ External Design
    Let’s now look at some of the points made in the introduction in greater
    detail.
    Lookout only allows you to set one expectation per test.  If you’re testing
    behavior with a reception expectation, then only one method-invocation
    expectation can be set.  If you’re testing state, then only one result can
    be verified.  It may seem like this would cause unnecessary duplication
    between tests.  While this is certainly a possibility, when you actually
    begin to try to avoid such duplication you find that you often do so by
    improving your interfaces.  This kind of restriction tends to encourage the
    use of value objects, which are easy to test, and more focused objects,
    which require simpler tests, as they have less behavior to test, per
    method.  By keeping your interfaces focused you’re also keeping your tests
    focused.
    Keeping your tests focused improves, in itself, test isolation, but let’s
    look at something that hinders it: setup and tear-down methods.  Most unit
    testing frameworks encourage test fragmentation by providing setup and
    tear-down methods.
    Setup methods create objects and, perhaps, just their behavior for a set of
    tests.  This means that you have to look in two places to figure out what’s
    being done in a test.  This may work fine for few methods with simple
    set-ups, but makes things complicated when the number of tests increases
    and the set-up is complex.  Often, each test further adjusts the previously
    set-up object before performing any verifications, further complicating the
    process of figuring out what state an object has in a given test.
    Tear-down methods clean up after tests, perhaps by removing records from a
    database or deleting files from the file-system.
    The duplication that setup methods and tear-down methods hope to remove is
    better avoided by improving your interfaces.  This can be done by providing
    better set-up methods for your objects and using idioms such as {Resource
    Acquisition Is Initialization}¹ for guaranteed clean-up, test or no test.
    By not using setup and tear-down methods we keep everything pertinent to a
    test in the test itself, thus improving test isolation.  (You also won’t
    {slow down your tests}² by keeping unnecessary state.)
    Most unit test frameworks also allow you to create arbitrary test helper
    methods.  Lookout doesn’t.  The same rationale that has been
    crystallized in the preceding paragraphs applies.  If you need helpers
    your interface isn’t good enough.  It really is as simple as that.
    To clarify: there’s nothing inherently wrong with test helper methods, but
    they should be general enough that they reside in their own library.  The
    support for mocks in Lookout is provided through a set of test helper
    methods that make it easier to create mocks than it would have been without
    them.  Lookout-rack³ is another example of a library providing test helper
    methods (well, one method, actually) that are very useful in testing web
    applications that use Rack⁴.
    A final point at which some unit test frameworks try to fragment tests
    further is documentation.  These frameworks provide ways of describing the
    whats and hows of what’s being tested, the rationale being that this will
    provide documentation of both the test and the code being tested.
    Describing how a stack data structure is meant to work is a common example.
    A stack is, however, a rather simple data structure, so such a description
    provides little, if any, additional information that can’t be extracted
    from the implementation and its tests themselves.  The implementation and
    its tests is, in fact, its own best documentation.  Taking the points made
    in the previous paragraphs into account, we should already have simple,
    self-describing, interfaces that have easily understood tests associated
    with them.  Rationales for the use of a given data structure or
    system-design design documentation is better suited in separate
    documentation focused at describing exactly those issues.
¹ Read the Wikipedia entry for Resource Acquisition Is Initialization at
  http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
² Read how 37signals had problems with slow Test::Unit tests at
  http://37signals.com/svn/posts/2742-the-road-to-faster-tests/
³ Visit the Lookout-rack home page at
  http://disu.se/software/lookout-rack/
⁴ Visit the Rack Rubyforge project page at
  http://rack.rubyforge.org/
§ Internal Design
    The internal design of Lookout has had a number of goals.
  •   As few external dependencies as possible
  •   As few internal dependencies as possible
  •   Internal extensibility provides external extensibility
  •   As fast load times as possible
  •   As high a ratio of value objects to mutable objects as possible
  •   Each object must have a simple, obvious name
  •   Use mix-ins, not inheritance, for shared behavior
  •   As few responsibilities per object as possible
  •   Optimizing for speed can only be done when you have all the facts
§ External Dependencies
    Lookout used to depend on Mocha for mocks and stubs.  While benchmarking I
    noticed that a method in Mocha was taking up more than 300 percent of the
    runtime.  It turned out that Mocha’s method for cleaning up back-traces
    generated when a mock failed was doing something incredibly stupid:
      backtrace.reject{ |l| Regexp.new(@lib).match(File.expand_path(l)) }
    Here ‹@lib› is a ‹String› containing the path to the lib sub-directory in
    the Mocha installation directory.  I reported it, provided a patch five
    days later, then waited.  Nothing happened.  {254 days later}¹, according
    to {Wolfram Alpha}², half of my patch was apparently applied (I say
    “apparently”, as I received no notification).  By that time I had replaced
    the whole mocking-and-stubbing subsystem and dropped the dependency.
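    One obvious fix, sketched here without any claim of matching the patch
    that was actually submitted, is to do the expensive work once rather than
    once per backtrace line:
      # Compile the pattern a single time, outside the block; escaping also
      # keeps characters in the path from being treated as metacharacters.
      lib = Regexp.new(Regexp.escape(@lib))
      backtrace.reject{ |l| lib.match(File.expand_path(l)) }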
    Many Ruby developers claim that Ruby and its gems move too fast for normal
    package-managing systems to keep up.  This episode is testament to the
    fact that that isn’t the case and that the real problem is instead sloppy
    practices.
    Please note that I don’t want to single out the Mocha library or its
    developers.  I only want to provide an example of how relying on external
    dependencies can be “considered harmful”.
¹ See the Wolfram Alpha calculation at http://www.wolframalpha.com/input/?i=days+between+march+17%2C+2010+and+november+26%2C+2010
² Check out the Wolfram Alpha computational knowledge engine at http://www.wolframalpha.com/
§ Internal Dependencies
    Lookout has been designed so as to keep each subsystem independent of any
    other.  The diff subsystem is, for example, completely decoupled from any
    other part of the system as a whole and could be moved into its own library
    if that ever becomes of interest to anyone.  What’s perhaps more
    interesting is that the diff subsystem is itself very modular.  The data
    passes through a set of filters that depends on what kind of diff has been
    requested, each filter yielding modified data as it receives it.  If you
    want to read some rather functional Ruby I can highly recommend looking at
    the code in the ‹lib/lookout/diff› directory.
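    To give a feel for that filter style, here is a generic sketch (not
    Lookout’s actual code; the class names are made up for illustration) of
    enumerable filters that yield modified data as they receive it and can be
    composed freely:
      class Upcase
        include Enumerable

        def initialize(source)
          @source = source
        end

        def each
          @source.each{ |item| yield item.upcase }
        end
      end

      class Numbered
        include Enumerable

        def initialize(source)
          @source = source
        end

        def each
          @source.each_with_index{ |item, index| yield '%d: %s' % [index + 1, item] }
        end
      end

      Numbered.new(Upcase.new(%w[alpha beta])).each{ |line| puts line }
      # 1: ALPHA
      # 2: BETA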
    This outlook on the design of the library also makes it easy to extend
    Lookout.  Lookout-rack was, for example, written in about four hours and
    about 5 of those 240 minutes were spent on setting up the interface between
    the two.
§ Optimizing For Speed
    The following paragraph is perhaps a bit personal, but might be interesting
    nonetheless.
    I’ve always worried about speed.  The original Expectations library used
    ‹extend› a lot to add new behavior to objects.  Expectations, for example,
    used to hold the result of their execution (what we now term “evaluation”)
    by being extended by a module representing success, failure, or error.  For
    the longest time I used this same method, worrying about the increased
    performance cost that creating new objects for results would incur.  I
    finally came to a point where I felt that the code was so simple and clean
    that rewriting this part of the code for a benchmark wouldn’t take more
    than perhaps ten minutes.  Well, ten minutes later I had my results and
    they confirmed that creating new objects wasn’t harming performance.  I was
    very pleased.
§ Naming
    I hate low lines (underscores).  I try to avoid them in method names and I
    always avoid them in file names.  Since the current “best practice” in the
    Ruby community is to put ‹BeginEndStorage› in a file called
    ‹begin_end_storage.rb›, I only name constants using a single noun.  This
    has had the added benefit that classes seem to have acquired less behavior,
    as using a single noun doesn’t allow you to tack on additional behavior
    without questioning if it’s really appropriate to do so, given the rather
    limited range of interpretation for that noun.  It also seems to encourage
    the creation of value objects, as something named ‹Range› feels a lot more
    like a value than ‹BeginEndStorage›.  (To reach object-oriented-programming
    Nirvana you must achieve complete value.)
§ News
  § 3.0.0
      The ‹xml› expectation has been dropped.  It wasn’t documented, didn’t
      suit very many use cases, and can be better implemented by an external
      library.
      The ‹arg› argument matcher for mock method arguments has been removed, as
      it didn’t provide any benefit over using ‹Object›.
      The ‹#yield› and ‹#each› methods on stub and mock methods have been
      removed.  They were slightly weird and their use case can be implemented
      using block parameters instead.
      The ‹stub› method inside ‹expect› blocks now stubs out the methods during
      the execution of a provided block instead of during the execution of the
      whole ‹expect› block.
      When a mock method is called too many times, this is reported
      immediately, with a full backtrace.  This makes it easier to pin down
      what’s wrong with the code.
      Query expectations were added.
      Explicit query expectations were added.
      Fluent boolean expectations, for example, ‹expect nil.to.be.nil?› have
      been replaced by query expectations (‹expect :nil? do nil end›) and
      explicit query expectations (‹expect result.to.be.nil? do nil end›).
      This was done to discourage creating objects as the expected value and
      creating objects that change during the course of the test.
      The ‹literal› expectation was added.
      Equality (‹#==›) is now checked before “caseity” (‹#===›) for modules,
      ranges, and regular expressions, to match the documentation; a brief
      illustration follows.
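      For example (a sketch assuming the basic ‹expect EXPECTED do … end›
      form): ‹/ab*/ == /ab*/› is true, while ‹/ab*/ === /ab*/› is false
      because ‹Regexp#===› only matches strings and symbols, so checking
      equality first lets an expectation whose block returns an equal regular
      expression pass.
        expect(/ab*/) do
          /ab*/   # passes via #==; #=== on its own would reject it
        end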
§ Financing
    Currently, most of my time is spent at my day job and in my rather busy
    private life.  Please motivate me to spend time on this piece of software
    by donating some of your money to this project.  Yeah, I realize that
    requesting money to develop software is a bit, well, capitalistic of me.
    But please realize that I live in a capitalistic society and I need money
    to have other people give me the things that I need to continue living
    under the rules of said society.  So, if you feel that this piece of
    software has helped you out enough to warrant a reward, please PayPal a
    donation to now@disu.se¹.  Thanks!  Your support won’t go unnoticed!
¹ Send a donation:
  https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=now%40disu%2ese&item_name=Lookout
§ Reporting Bugs
    Please report any bugs that you encounter to the {issue tracker}¹.
  ¹ See https://github.com/now/lookout/issues
§ Contributors
    Contributors to the original expectations codebase are mentioned there.  We
    hope no one on that list feels left out of this list.  Please
    {let us know}¹ if you do.
  •   Nikolai Weibull
¹ Add an issue to the Lookout issue tracker at https://github.com/now/lookout/issues
§ Licensing
    Lookout is free software: you may redistribute it and/or modify it under
    the terms of the {GNU Lesser General Public License, version 3}¹ or later²,
    as published by the {Free Software Foundation}³.
¹ See http://disu.se/licenses/lgpl-3.0/
² See http://gnu.org/licenses/
³ See http://fsf.org/
                                   Inventory
  Inventory keeps track of the contents of your Ruby¹ projects.  Such an
  inventory can be used to load the project, create gem specifications and
  gems, run unit tests, compile extensions, and verify that the project’s
  content is what you think it is.
¹ See http://ruby-lang.org/
§ Usage
    Let’s begin by discussing the project structure that Inventory expects you
    to use.  It’s pretty much exactly the same as the standard Ruby project
    structure¹:
      ├── README
      ├── Rakefile
      ├── lib
      │   ├── foo-1.0
      │   │   ├── bar.rb
      │   │   └── version.rb
      │   └── foo-1.0.rb
      └── test
          └── unit
              ├── foo-1.0
              │   ├── bar.rb
              │   └── version.rb
              └── foo-1.0.rb
    Here you see a simplified version of the project structure of a project
    called “Foo”.  The only real difference from the standard is that the main
    entry point into the library is named “foo-1.0.rb” instead of “foo.rb” and
    that the root sub-directory of “lib” is similarly named “foo-1.0” instead
    of “foo”.  The difference is the inclusion of the API version.  This must
    be the major version of the project followed by a constant “.0”.  The
    reason for this is that it allows concurrent installations of different
    major versions of the project and means that the wrong version will never
    accidentally be loaded with require.
    There’s a bigger difference in the content of the files.
    The file ‹lib/foo-1.0/version.rb› will contain our inventory instead of a
    String:
      require 'inventory-1.0'
      class Foo
        Version = Inventory.new(1, 4, 0){
          authors{
            author 'A. U. Thor', 'a.u.thor@example.org'
          }
          homepage 'http://example.org/'
          licenses{
            license 'LGPLv3+',
                    'GNU Lesser General Public License, version 3 or later',
                    'http://www.gnu.org/licenses/'
          }
          def dependencies
            super + Dependencies.new{
              development 'baz', 1, 3, 0
              runtime 'goo', 2, 0, 0
              optional 'roo-loo', 3, 0, 0, :feature => 'roo-loo'
            }
          end
          def package_libs
            %w[bar.rb]
          end
        }
      end
    We’re introducing quite a few concepts at once, and we’ll look into each in
    greater detail, but we begin by setting the ‹Version› constant to a new
    instance of an Inventory with major, minor, and patch version atoms 1, 4,
    and 0.  Then we add a couple of dependencies and list the library files
    that are included in this project.
    The version numbers shouldn’t come as a surprise.  These track the version
    of the API that we’re shipping using {semantic versioning}².  They also
    allow the Inventory#to_s method to act as if you’d defined Version as
    ‹'1.4.0'›.
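    For instance, given the inventory above (a quick check that assumes
    ‹Inventory#to_s› joins the atoms with dots, as described):
      Foo::Version.to_s     # => '1.4.0'
      "foo-#{Foo::Version}" # => 'foo-1.4.0', since interpolation calls #to_s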
    Next follows information about the authors of the project, the project’s
    homepage, and the project’s licenses.  Each author has a name and an email
    address.  The homepage is simply a string URL.  Licenses have an
    abbreviation, a name, and a URL where the license text can be found.
    We then extend the definition of ‹dependencies› by adding another set of
    dependencies to ‹super›.  The result of ‹super› includes a dependency on
    the version of the inventory project that’s being used with this project,
    so you’ll never have to list that yourself.  The other three dependencies
    are all of
    different kinds: development, runtime, and optional.  A development
    dependency is one that’s required while developing the project, for
    example, a unit-testing framework, a documentation generator, and so on.
    Runtime dependencies are requirements of the project to be able to run,
    both during development and when installed.  Finally, optional dependencies
    are runtime dependencies that may or may not be required during execution.
    The difference between runtime and optional is that the inventory won’t try
    to automatically load an optional dependency, instead leaving that up to
    you to do when and if it becomes necessary.  By that logic, runtime
    dependencies will be automatically loaded, which is a good reason for
    having dependency information available at runtime.
    The version numbers of dependencies also use semantic versioning, but note
    that the patch atom is ignored unless the major atom is 0.  You should
    always only depend on the major and minor atoms.
    As mentioned, runtime dependencies will be automatically loaded and the
    feature they try to load is based on the name of the dependency with a
    “-X.0” tacked on the end, where ‘X’ is the major version of the dependency.
    Sometimes, this isn’t correct, in which case the :feature option may be
    given to specify the name of the feature.
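    Concretely, for the dependencies in the inventory above, the rules work
    out as follows (a summary of what was just described, not additional API):
      # Required automatically, as require 'goo-2.0':
      runtime 'goo', 2, 0, 0
      # Would default to the feature name 'roo-loo-3.0'; :feature overrides
      # that, and, being optional, it is never loaded automatically:
      optional 'roo-loo', 3, 0, 0, :feature => 'roo-loo'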
    You may also override other parts of a dependency by passing in a block to
    the dependency, much like we’re doing for inventories.
    The rest of an inventory will list the various files included in the
    project.  This project consists of only one file in addition to those that
    an inventory automatically includes (the Rakefile, the README, the main
    entry point, and the version.rb file that defines the inventory itself),
    namely the
    library file ‹bar.rb›.  Library files will be loaded automatically when the
    main entry point file loads the inventory.  Library files that shouldn’t be
    loaded may be listed under a different heading, namely “additional_libs”.
    Both these sets of files will be used to generate a list of unit test files
    automatically, so each library file will have a corresponding unit test
    file in the inventory.  We’ll discuss the different headings of an
    inventory in more detail later on.
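    So, if the project gained a library file that should ship but only be
    loaded on demand, it could be listed much as ‹package_libs› is above (a
    sketch; see the API reference below for the exact set of headings):
      def additional_libs
        %w[extra.rb]  # packaged and given a unit-test slot, but not auto-loaded
      end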
    Now that we’ve written our inventory, let’s set it up so that its content
    gets loaded when our main entry point gets loaded.  We add the following
    piece of code to ‹lib/foo-1.0.rb›:
      module Foo
        load File.expand_path('../foo-1.0/version.rb', __FILE__)
        Version.load
      end
    That’s all there is to it.
    The inventory can also be used to great effect from a Rakefile using a
    separate project called Inventory-Rake³.  Using it will give us tasks for
    cleaning up our project, compiling extensions, installing dependencies,
    installing and uninstalling the project itself, and creating and pushing
    distribution files to distribution points.
      require 'inventory-rake-1.0'
      load File.expand_path('../lib/foo-1.0/version.rb', __FILE__)
      Inventory::Rake::Tasks.define Foo::Version
      Inventory::Rake::Tasks.unless_installing_dependencies do
        require 'lookout-rake-3.0'
        Lookout::Rake::Tasks::Test.new
      end
    It’s ‹Inventory::Rake::Tasks.define› that does the heavy lifting.  It takes
    our inventory and sets up the tasks mentioned above.
    As we want to be able to use our Rakefile to install our dependencies for
    us, the rest of the Rakefile is inside the conditional
    #unless_installing_dependencies, which, as the name certainly implies,
    executes its block unless the task being run is the one that installs our
    dependencies.  This becomes relevant when we set up Travis⁴ integration
    next.  The only conditional set-up we do in our Rakefile is creating our
    test task via Lookout-Rake⁵, which also uses our inventory to find the unit
    tests to run when executed.
    Travis integration is straightforward.  Simply put
      before_script:
        - gem install inventory-rake -v '~> VERSION' --no-rdoc --no-ri
        - rake gem:deps:install
    in the project’s ‹.travis.yml› file, replacing ‹VERSION› with the version
    of Inventory-Rake that you require.  This’ll make sure that Travis installs
    all development, runtime, and optional dependencies that you’ve listed in
    your inventory before running any tests.
    You might also need to put
      env:
        - RUBYOPT=rubygems
    in your ‹.travis.yml› file, depending on how things are set up.
¹ Ruby project structure: http://guides.rubygems.org/make-your-own-gem/
² Semantic versioning: http://semver.org/
³ Inventory-Rake: http://disu.se/software/inventory-rake-1.0/
⁴ Travis: http://travis-ci.org/
⁵ Lookout-Rake: http://disu.se/software/lookout-rake-3.0/
§ API
    If the guide above doesn’t provide you with all the answers you seek, you
    may refer to the API¹ for more answers.
¹ See http://disu.se/software/inventory-1.0/api/Inventory/
§ Financing
    Currently, most of my time is spent at my day job and in my rather busy
    private life.  Please motivate me to spend time on this piece of software
    by donating some of your money to this project.  Yeah, I realize that
    requesting money to develop software is a bit, well, capitalistic of me.
    But please realize that I live in a capitalistic society and I need money
    to have other people give me the things that I need to continue living
    under the rules of said society.  So, if you feel that this piece of
    software has helped you out enough to warrant a reward, please PayPal a
    donation to now@disu.se¹.  Thanks!  Your support won’t go unnoticed!
¹ Send a donation:
  https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=now@disu.se&item_name=Inventory
§ Reporting Bugs
    Please report any bugs that you encounter to the {issue tracker}¹.
  ¹ See https://github.com/now/inventory/issues
§ Authors
    Nikolai Weibull wrote the code, the tests, the documentation, and this
    README.
§ Licensing
    Inventory is free software: you may redistribute it and/or modify it under
    the terms of the {GNU Lesser General Public License, version 3}¹ or later²,
    as published by the {Free Software Foundation}³.
¹ See http://disu.se/licenses/lgpl-3.0/
² See http://gnu.org/licenses/
³ See http://fsf.org/