Full Text Indexing Progress
Please see GOlr instead.
There are two separate fronts of progress for FTI. The first is in the indexing system itself ("system"); this would include things like software used (Solr, Jetty, etc.), schema, deployment, hardware, and other low-level issues that are probably not going to be hugely important to end-of-the-line users and programmers. The second is the consumption and use of FTI ("software"). This would include the integration into various pieces of software, services built up around FTI, and (possibly) abstraction APIs.
While there are some blurry points in this distinction (e.g. what about a JSON service built directly into the engine), hopefully it will provide a logical way to divide most of the problems that will be faced.
A changeable list of goals as we progress:
- Produce a basic stand-alone FTI based on Solr. Make sure it outperforms the previous attempts (benchmark).
- Convert services currently consuming old FTI to Solr.
- Replace the current autocomplete with Solr calls; have AmiGO LiveSearch use Solr.
- AmiGO API for canonicalizer.
- Move to new/public hardware.
- Create public interface.
- Produce version with "complicated" schema.
  - Terms: try associations and evidence.
  - Annotations
  - GPs (necessary)
- Fitness for purpose tests.
- "Big join" test.
- See if scaling works as desired.
- Try other proxies/balancers (Nginx, Cherokee, etc.).
- Functional as virtualized service (see Virtualization).
- Implement smart and robust canonicalizer for GPs and terms.
- Create rich searching interfaces using the new engine. (Working)
- The final version would need to be combined with an "ontology engine".
Some of these are out of order or depend on something elsewhere in the list.
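The autocomplete goal above amounts to issuing prefix queries against Solr's JSON interface. A minimal sketch of what that might look like follows; the Solr URL and field names (`id`, `label`) are assumptions for illustration, not the actual AmiGO configuration:

```python
# Sketch of a term-autocomplete query against a Solr index.
# The Solr URL and field names are hypothetical.
import json
from urllib.parse import urlencode

SOLR_SELECT = "http://localhost:8080/solr/select"  # hypothetical endpoint

def completion_url(prefix, rows=10):
    """Build a Solr select URL matching terms whose label starts with
    the user's input, asking for a JSON response."""
    params = {
        "q": "label:%s*" % prefix.lower(),
        "fl": "id,label",
        "rows": rows,
        "wt": "json",
    }
    return SOLR_SELECT + "?" + urlencode(params)

def parse_completions(raw_json):
    """Extract (id, label) pairs from a Solr JSON response body."""
    docs = json.loads(raw_json)["response"]["docs"]
    return [(d["id"], d["label"]) for d in docs]
```

The client would fetch `completion_url(...)` and hand the body to `parse_completions`; keeping the URL construction in one place makes it easy to swap in a proxied endpoint later.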
Solr on Jetty, with an Apache proxy, is currently installed on a BBOP development workstation for experimentation.
This backend will replace the majority of the heavy lifting currently done in AmiGO. The current setup is defined by files in the GO SVN repository on SF.net.
We'll move to something more robust and less experimental as soon as possible.
The production schema is essentially a set of SQL commands, wrapped in XML, used to generate the data for the Lucene index. Note that it feeds off of GOLD.
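As a hedged illustration of what such an XML-wrapped SQL definition can look like, here is a fragment in the style of Solr's DataImportHandler; the table, column, and connection names are invented for illustration and are not the actual production schema:

```xml
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/gold" user="reader"/>
  <document>
    <!-- Hypothetical entity: one Solr document per GO term. -->
    <entity name="term"
            query="SELECT id, name AS label, term_type AS source
                   FROM term WHERE is_obsolete = 0">
      <field column="id" name="id"/>
      <field column="label" name="label"/>
      <field column="source" name="source"/>
    </entity>
  </document>
</dataConfig>
```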
The Lucene schema defines how the GO data (extracted by the production schema) is interpreted for use in Lucene.
It uses a very flat and basic schema, with small lists for things like synonyms. To make it generally usable (i.e. have an index for all aspects that can be searched generically), certain items are overloaded into the same field. For example, label is used in multiple ways.
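To illustrate that overloading, a Solr `schema.xml` can copy several source fields into one shared field that all generic searches hit. This is a sketch with invented field names, not the actual schema:

```xml
<fields>
  <field name="id"      type="string" indexed="true" stored="true"/>
  <field name="label"   type="text"   indexed="true" stored="true"/>
  <field name="synonym" type="text"   indexed="true" stored="true"
         multiValued="true"/>
  <!-- One overloaded field that generic searches query. -->
  <field name="general_blob" type="text" indexed="true" stored="false"
         multiValued="true"/>
</fields>
<copyField source="label"   dest="general_blob"/>
<copyField source="synonym" dest="general_blob"/>
```

The trade-off is the one noted above: a single field is simple to search generically, but the original meaning of each value (label versus synonym) is lost unless the source fields are also kept.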
There is now a live search and a term completion component that feed off of the Solr index.
These services largely consume JSON directly from the Solr server. This will have to change in the future due to security and integrity concerns.
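One way to address those concerns (a sketch, with an invented whitelist and a hypothetical Solr URL) is a thin server-side wrapper that filters client parameters before anything is forwarded to Solr, so clients never talk to the engine directly:

```python
# Minimal sketch of a parameter-whitelisting wrapper for Solr queries.
# The Solr URL, whitelist, and limits are assumptions for illustration.
from urllib.parse import urlencode

SOLR_SELECT = "http://localhost:8080/solr/select"  # hypothetical endpoint
ALLOWED = {"q", "fl", "rows", "start"}             # permitted parameters
MAX_ROWS = 100

def safe_solr_url(client_params):
    """Drop unknown parameters, cap the row count, and force JSON
    output; return the URL the server (not the client) will fetch."""
    params = {k: v for k, v in client_params.items() if k in ALLOWED}
    params["rows"] = min(int(params.get("rows", 10)), MAX_ROWS)
    params["wt"] = "json"  # the server decides the response format
    return SOLR_SELECT + "?" + urlencode(params)
```

The design point is that the wrapper, not the browser, holds the Solr address, so update handlers and unbounded queries are simply unreachable from the outside.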
Past experiments for FTI have included various combinations of:
- Apache mod_perl