Full Text Indexing Progress
Latest revision as of 10:18, 25 October 2017

DEPRECATED

Please see GOlr instead.

Overview

There are two separate fronts of progress for FTI. The first is the indexing system itself ("system"); this includes the software used (Solr, Jetty, etc.), the schema, deployment, hardware, and other low-level issues that are probably not hugely important to end users and programmers. The second is the consumption and use of FTI ("software"); this includes integration into various pieces of software, services built around FTI, and (possibly) abstraction APIs.

While there are some blurry points in this distinction (e.g., a JSON service built directly into the engine), hopefully it will provide a logical way to divide most of the problems that will be faced.

Goals

A changeable list of goals as we progress:

  • Produce a basic stand-alone FTI based on Solr.
  • Make sure it's better than the previous attempts (benchmark). (done)
  • Convert services currently consuming old FTI to Solr.
    • Replace current autocomplete with Solr calls. (done)
    • AmiGO LiveSearch using Solr. (done)
    • AmiGO API for canonicalizer.
  • Move to new/public hardware.
  • Create public interface.
  • Produce version with "complicated" schema.
    • Terms (done)
    • Try associations and evidence. (done)
    • Annotations (done)
    • Gene products (GPs; necessary)
  • Fitness for purpose tests.
    • "Big join" test.
    • See if scaling works as desired.
    • Try other proxies/balancers (Nginx, Cherokee, etc.).
    • Functional as virtualized service (see Virtualization).
  • Implement smart and robust canonicalizer for GPs and terms.
  • Create rich searching interfaces using new engine. (Working)
    • Final would need to be combined with "ontology engine".

Some of these are out of order or depend on something elsewhere in the list.

System Progress

Installation

Solr on Jetty is currently installed on a BBOP development workstation for experimentation:

  • http://accordion.lbl.gov:8080/solr

This backend will replace the majority of the heavy lifting currently done in AmiGO. The current setup is defined by files in the GO SVN repo on SF.net: http://geneontology.svn.sourceforge.net/viewvc/geneontology/java/gold/solr/
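Queries against the instance go through Solr's standard select handler, which can return JSON. The sketch below builds such a query URL and parses a response in the shape Solr's JSON writer produces; the field names ("id", "label") and the canned response are illustrative assumptions, not the actual index contents, and no network call is made.

```python
import json
from urllib.parse import urlencode

# Base URL of the development instance named above (hypothetical handler path).
SOLR_BASE = "http://accordion.lbl.gov:8080/solr/select"

def solr_query_url(query, rows=10):
    """Build a Solr select URL asking for JSON output (wt=json)."""
    params = {"q": query, "rows": rows, "wt": "json"}
    return SOLR_BASE + "?" + urlencode(params)

# A canned response in the shape Solr's JSON response writer returns;
# the document fields here are made up for illustration.
canned = json.loads(
    '{"responseHeader": {"status": 0},'
    ' "response": {"numFound": 1,'
    ' "docs": [{"id": "GO:0008150", "label": "biological_process"}]}}'
)

print(solr_query_url("label:kinase"))
print(canned["response"]["docs"][0]["label"])
```

A consumer like an autocomplete widget would issue such a URL and read `response.docs` out of the returned JSON.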

We'll move to something more robust and less experimental as soon as possible.

Schema

The production schema is essentially the SQL commands used to generate the data for the Lucene index, in XML format. Note that it feeds off of GOLD: http://geneontology.svn.sourceforge.net/viewvc/geneontology/java/gold/solr/conf/gold-pg-config.xml
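Such "SQL in XML" configurations follow Solr's DataImportHandler format. A minimal sketch is below; the table and column names (`term`, `acc`, `name`) and connection details are assumptions modeled loosely on the GO database, not a copy of the real gold-pg-config.xml linked above.

```xml
<!-- Hypothetical DataImportHandler config: pull rows from a
     PostgreSQL GOLD database and map columns to index fields. -->
<dataConfig>
  <dataSource driver="org.postgresql.Driver"
              url="jdbc:postgresql://localhost/gold"
              user="gold" password="..."/>
  <document>
    <entity name="term" query="SELECT acc, name FROM term">
      <field column="acc"  name="id"/>
      <field column="name" name="label"/>
    </entity>
  </document>
</dataConfig>
```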

The Lucene schema is how the GO data (extracted by the production schema) is interpreted for use in Lucene: http://geneontology.svn.sourceforge.net/viewvc/geneontology/java/gold/solr/conf/schema.xml

It uses a very flat and basic schema, with small lists for things like synonyms. To make it generically searchable (i.e., a single index covering all aspects that can be queried the same way), certain items are overloaded into the same field; for example, the label field is used in multiple ways.
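The usual Solr idiom for this kind of overloading is `copyField`: differently sourced values are folded into one catch-all field so a single generic query covers them all. The fragment below is a sketch only; the field names are illustrative and do not reproduce the production schema.xml.

```xml
<!-- Hypothetical flat schema: labels and synonyms are both copied
     into a generic "text" field that generic searches hit. -->
<schema name="go" version="1.3">
  <fields>
    <field name="id"      type="string" indexed="true" stored="true"/>
    <field name="label"   type="text"   indexed="true" stored="true"/>
    <field name="synonym" type="text"   indexed="true" stored="true"
           multiValued="true"/>
    <field name="text"    type="text"   indexed="true" stored="false"
           multiValued="true"/>
  </fields>
  <copyField source="label"   dest="text"/>
  <copyField source="synonym" dest="text"/>
  <uniqueKey>id</uniqueKey>
</schema>
```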

Software Progress

There is now a live search (http://amigo.berkeleybop.org/cgi-bin/amigo/amigo?mode=live_search_gold) and a term completion component that feed off the Solr index.

These services largely consume the direct JSON service from the Solr server. This will have to change in the future due to security and integrity issues.
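One common way to address those issues is to stop exposing Solr's JSON service directly and route requests through a thin application-tier proxy that whitelists the parameters it forwards. The sketch below shows only the parameter-sanitizing step; the allowed parameter set and function name are assumptions, not an existing AmiGO API.

```python
from urllib.parse import urlencode

# Hypothetical whitelist: only plain search parameters may pass through.
ALLOWED_PARAMS = {"q", "rows", "start", "fl"}

def sanitized_solr_params(raw_params):
    """Keep only whitelisted parameters and force JSON output, so a
    client cannot smuggle in handler overrides or update commands."""
    clean = {k: v for k, v in raw_params.items() if k in ALLOWED_PARAMS}
    clean["wt"] = "json"  # response format is fixed server-side
    return clean

# "qt" (a request-handler override) is silently dropped.
incoming = {"q": "label:kinase", "rows": "10", "qt": "/update"}
print(urlencode(sanitized_solr_params(incoming)))
```

The proxy would then issue the cleaned query to Solr itself and relay the response, keeping the Solr port off the public network.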

Past Experiments

Past experiments for FTI have included various combinations of:

  • Perl/CLucene
  • Xapian
  • Apache mod_perl
  • FCGI
  • Ruby/Ferret