Full Text Indexing Progress

Overview

There are two separate fronts of progress for FTI. The first is the indexing system itself ("system"); this covers the software used (Solr, Jetty, etc.), schema, deployment, hardware, and other low-level issues that probably won't matter much to end users and programmers. The second is the consumption and use of FTI ("software"); this covers integration into various pieces of software, services built around FTI, and (possibly) abstraction APIs.

While there are some blurry points in this distinction (e.g., where would a JSON service built directly into the engine fall?), hopefully it will provide a logical way to divide most of the problems we'll face.

Goals

A changeable list of goals as we progress:

  • Produce a basic stand-alone FTI based on Solr.
  • Make sure it's better than the previous attempts (benchmark).
  • Convert services currently consuming old FTI to Solr.
    • Likely replace current autocomplete with Solr proxy calls.
    • Replace current AmiGO LiveSearch system using AmiGO::External calls.
    • AmiGO API for canonicalizer.
  • Move to new/public hardware.
  • Create public interface.
  • Produce version with "complicated" schema.
    • Terms in GPs.
    • GPs in terms (necessary?).
    • Try associations and evidence.
  • Fitness for purpose tests.
    • "Big join" test.
    • See if scaling works as desired.
    • Try other proxies/balancers (Nginx, Cherokee, etc.).
    • Functional as virtualized service (see Virtualization).
  • Implement a smart and robust canonicalizer for GPs and terms (sketched below).
  • Create rich searching interfaces using new engine.
    • Final would need to be combined with "ontology engine".

Some of these are out of order or depend on something elsewhere in the list.
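
As a concrete sketch of the canonicalizer goal above: take whatever label a user supplies (an acc, a name, or a synonym), ask Solr for the matching document, and hand back its canonical acc. This is only an illustration in Python; the endpoint URL and the "synonym" field are assumptions, while "acc" and "name" follow the flat schema described under Schema below.

  import json
  import urllib.parse
  import urllib.request

  SOLR_SELECT = "http://localhost:8080/solr/select"  # assumed endpoint

  def canonicalize(label):
      """Return the canonical acc for a term or GP label, or None."""
      # Quote the label so it is treated as a phrase, not query syntax.
      q = 'acc:"{0}" OR name:"{0}" OR synonym:"{0}"'.format(label)
      params = urllib.parse.urlencode({"q": q, "wt": "json", "rows": 1})
      with urllib.request.urlopen(SOLR_SELECT + "?" + params) as resp:
          data = json.load(resp)
      docs = data["response"]["docs"]
      return docs[0]["acc"] if docs else None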

System Progress

Installation

Solr on Jetty, with an Apache proxy, is currently installed on a BBOP development workstation.

Currently, it is not terribly useful unless you're sending it the right commands (see the sketch below). It probably won't be played with again until at least AmiGO 1.8 is out and we try to switch the search backend and autocomplete over for 1.9.
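
"The right commands" are plain HTTP GETs against Solr's select handler behind the proxy; the wt=json parameter asks Solr for JSON instead of the default XML. A quick sketch, with a placeholder URL standing in for the (non-public) workstation:

  import json
  import urllib.parse
  import urllib.request

  BASE = "http://workstation.example/solr"  # placeholder host

  params = urllib.parse.urlencode({"q": "name:binding", "wt": "json", "rows": 5})
  with urllib.request.urlopen(BASE + "/select?" + params) as resp:
      data = json.load(resp)

  print(data["response"]["numFound"], "documents match")
  for doc in data["response"]["docs"]:
      print(doc.get("acc"), doc.get("name"))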

The current setup is defined by files in the GO SVN repo on SF.net. [1]

We'll move to something more robust and public as soon as possible.

Schema

The production schema is essentially the SQL commands used to generate the data for Lucene, stored in XML format. [2]

The Lucene schema defines how the GO data (extracted by the production schema) is interpreted for use in Lucene. [3]

It uses a very flat and basic schema, with small lists for things like synonyms. To make it generally usable, term accs and GP dbxrefs are both mapped to "acc", and term name and GP full_name are both mapped to "name".
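
A sketch of that flattening in Python, with the row dicts standing in for what the production schema's SQL actually returns (the real field list lives in the Lucene schema [3]):

  def term_to_doc(term_row):
      """Map a term row onto the shared flat document shape."""
      return {
          "acc": term_row["acc"],        # term acc -> "acc"
          "name": term_row["name"],      # term name -> "name"
          "synonym": term_row.get("synonyms", []),
      }

  def gp_to_doc(gp_row):
      """Map a gene product row onto the same shape."""
      return {
          "acc": gp_row["dbxref"],       # gp dbxref -> "acc"
          "name": gp_row["full_name"],   # gp full_name -> "name"
          "synonym": gp_row.get("synonyms", []),
      }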

In the future, we'll want a richer schema that stores valuable, commonly used information, the main example being association information stored directly in GPs and terms. This, coupled with an association index (for example, all "GO:9988776xUniProt" keys, with all direct listings stored) and the ontology engine, may cover much of the same ground as relational searches, but much faster.
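
Nothing like this exists yet, but very loosely, the term and association-index documents might look like the following sketch; every field name and value here is hypothetical:

  def association_key(term_acc, datasource):
      """Composite key in the "GO:9988776xUniProt" style mentioned above."""
      return "{}x{}".format(term_acc, datasource)

  # A term document carrying its direct GP associations...
  term_doc = {
      "acc": "GO:0016301",
      "name": "kinase activity",
      "gp_acc": ["UniProtKB:P12345", "UniProtKB:Q67890"],  # hypothetical
  }

  # ...and the matching association-index document.
  assoc_doc = {
      "key": association_key("GO:0016301", "UniProt"),
      "direct_gps": term_doc["gp_acc"],
  }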

Software Progress

The first steps will be converting the autocomplete to use Solr and having AmiGO use it to run LiveSearch.
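
In AmiGO this would be a Perl call through AmiGO::External, but the shape of the proxy is easy to sketch in Python: forward the user's typed prefix to Solr and return a trimmed list of completions. The endpoint and query fields are assumptions.

  import json
  import urllib.parse
  import urllib.request

  SOLR_SELECT = "http://localhost:8080/solr/select"  # assumed endpoint

  def autocomplete(prefix, limit=10):
      """Return up to `limit` (acc, name) pairs for a typed prefix."""
      # Wildcard queries are not analyzed, so lowercase the prefix ourselves.
      params = urllib.parse.urlencode({
          "q": "name:{}*".format(prefix.lower()),
          "wt": "json",
          "rows": limit,
      })
      with urllib.request.urlopen(SOLR_SELECT + "?" + params) as resp:
          data = json.load(resp)
      return [(d.get("acc"), d.get("name")) for d in data["response"]["docs"]]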

Not much yet. More to come.

Past Experiments

Past experiments for FTI have included various combinations of:

  • Perl/CLucene
  • Xapian
  • Apache mod_perl
  • FCGI
  • Ruby/Ferret