Reviewing papers


The purpose of this page is to support members of the GOC by providing guidelines on how to review a paper that uses GO.

Assessing the scientific merit

  1. Does the research question posed make sense?
  2. Does the methodology outlined address the questions posed?
  3. Do the data and results support the conclusions?
  4. Is this work a significant advance over the previous literature?
  5. Is there anything in the paper that seems unsure, controversial, or inconclusive? (Question: what do we do in this case?)


Comment (Val): perhaps the points below (which were under assessing scientific merit) should go in the relevant sections below. These were specific problems applicable to one particular paper; other papers will have other issues, and this document should be more general:


  1. There should be a minimal number of simplifying assumptions (move to methodology?)
  2. They show an awareness of the rigor curators apply when inferring annotations from sequence similarity (move to source data interpretation? applies only to papers that make some special case about ISS, for example assessing accuracy)
  3. They take into account the relative completeness or incompleteness of annotation, if this is relevant to their analysis and results (move to results)
  4. BLAST scores should be used intelligently, not with simply arbitrary cut-offs (move to methodology). There should be an awareness that annotation providers do not use BLAST with an arbitrary cut-off when making annotations inferred from sequence similarity. (A sketch of such a filter follows this list.)
  5. Are methods and/or results validated against examples where appropriate (e.g. incorrect annotations for a paper about annotation accuracy; other examples?) (move to methods/results)
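
To make the BLAST point concrete: below is a minimal sketch of a filter that combines E-value, percent identity, and query coverage rather than a single arbitrary cut-off. It assumes BLAST+ tabular output produced with -outfmt "6 std qlen" (the standard 12 columns plus the query length); the file name and thresholds are placeholders, not recommendations.

  # Minimal sketch: filtering BLAST hits on more than a bare E-value cut-off.
  # Assumes tabular output from: blastp ... -outfmt "6 std qlen"
  # (standard 12 columns followed by the query length).
  # The thresholds are placeholders, not recommended values.

  def filter_hits(path, max_evalue=1e-10, min_identity=40.0, min_query_cov=0.8):
      """Yield (query, subject) pairs passing E-value, identity, and coverage filters."""
      with open(path) as handle:
          for line in handle:
              f = line.rstrip("\n").split("\t")
              pident = float(f[2])                 # percent identity
              qstart, qend = int(f[6]), int(f[7])  # query alignment coordinates
              evalue = float(f[10])
              qlen = int(f[12])                    # query length (the extra column)
              coverage = (qend - qstart + 1) / qlen
              if evalue <= max_evalue and pident >= min_identity and coverage >= min_query_cov:
                  yield f[0], f[1]

  if __name__ == "__main__":
      for query, subject in filter_hits("hits.tsv"):  # hypothetical file name
          print(query, subject)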

Materials, that is, the data sets used as input

  1. Are the datasets used appropriate to answer the research question posed?
  2. Is the paper explicit about:
    • which data sources it has used (version of the ontology, date of the annotation files, numbers and types of annotations used); the sketch after this list shows where this metadata typically lives in an annotation file
    • how it has partitioned these resources (which subsets of the data were used for which parts of the analysis)
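
GAF-format annotation files carry this metadata in comment lines prefixed with "!" at the top of the file: the mandatory "!gaf-version:" line, plus provider-dependent lines recording the generation date and source. A minimal sketch for extracting them (the file name is hypothetical):

  # Minimal sketch: reading the metadata header of a GAF annotation file.
  # GAF files begin with "!"-prefixed comment lines; the first is the
  # mandatory "!gaf-version:" line, and providers usually add lines with
  # the generation date and source. The file name below is hypothetical.

  def gaf_header(path):
      """Return the '!'-prefixed header lines of a GAF file as a list."""
      header = []
      with open(path) as handle:
          for line in handle:
              if not line.startswith("!"):
                  break  # the first annotation row ends the header
              header.append(line.rstrip("\n"))
      return header

  for line in gaf_header("goa_human.gaf"):
      print(line)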

Source Data Interpretation

  1. Do the authors take account of the DAG structure in their analysis, and show a consideration for the consequences of misuse of the DAG (direct vs. indirect annotations)? (See the sketch after this list.)
  2. Do the authors display an understanding of how current curation best practices may affect their results and/or conclusions? This includes, but is not limited to:
    • understanding of the procedures used by contributing MODs for the evaluation of similarity/orthology for functional transfer
    • evidence codes (meaning and use), particularly with respect to inferring functions based on sequence similarity (i.e. only from experimental sources)
  3. Awareness of the possible incomplete nature of the annotations due to the curation backlog
  4. Qualifier awareness (especially NOT)
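
To make points 1 and 2 concrete: the sketch below propagates direct annotations up the GO DAG over is_a edges (giving the indirect annotations) and keeps only experimentally supported annotations. The parent map and annotation table are toy data; a real analysis would parse them from the ontology and a GAF file.

  # Minimal sketch: direct vs. indirect annotations, plus evidence-code
  # filtering. The parent map and annotations below are toy data.

  # The experimental evidence codes defined by GO.
  EXPERIMENTAL = {"EXP", "IDA", "IPI", "IMP", "IGI", "IEP"}

  # Toy fragment of the GO DAG: child term -> is_a parents.
  IS_A = {
      "GO:0016301": ["GO:0016772"],  # kinase activity
      "GO:0016772": ["GO:0016740"],  # transferase activity, P-containing groups
      "GO:0016740": ["GO:0003824"],  # transferase activity
      "GO:0003824": [],              # catalytic activity (top of this fragment)
  }

  def ancestors(term, is_a):
      """All terms reachable from `term` via is_a edges (its indirect annotations)."""
      seen, stack = set(), list(is_a.get(term, []))
      while stack:
          parent = stack.pop()
          if parent not in seen:
              seen.add(parent)
              stack.extend(is_a.get(parent, []))
      return seen

  # Toy direct annotations: (gene, term, evidence code).
  annotations = [("geneA", "GO:0016301", "IDA"), ("geneB", "GO:0016301", "IEA")]

  for gene, term, evidence in annotations:
      if evidence not in EXPERIMENTAL:
          continue  # drop non-experimental support such as IEA
      print(gene, "direct:", term, "indirect:", sorted(ancestors(term, IS_A)))

An analysis that counts "genes annotated to catalytic activity" without this propagation step will silently miss geneA.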

Methodology

  1. Is the reviewer familiar with the specific type of analysis used in the manuscript?
  2. Is the test relevant to the research question?
  3. Are the algorithms developed, and the reasoning used to evaluate the data, robust for the purpose?
  4. Are any statistical tests used appropriate to evaluate significance? (One recurring pitfall, multiple testing, is sketched after this list.)
  5. Use of third-party software: are versions, parameters, and cut-offs specified?
  6. Reproducibility: are the methods used to reach the conclusions fully described and reproducible?
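
On point 4: analyses that test many GO terms at once need a multiple-testing correction before p-values are interpreted as significant. A minimal, self-contained Benjamini-Hochberg sketch (the p-values are made up):

  # Minimal sketch: Benjamini-Hochberg FDR adjustment, commonly needed when
  # many GO terms are tested at once. The p-values below are made up.

  def benjamini_hochberg(pvalues):
      """Return BH-adjusted p-values (q-values) in the original order."""
      m = len(pvalues)
      order = sorted(range(m), key=lambda i: pvalues[i])
      adjusted = [0.0] * m
      prev = 1.0
      for offset, i in enumerate(reversed(order)):
          rank = m - offset  # 1-based rank of this p-value, largest first
          prev = min(prev, pvalues[i] * m / rank)
          adjusted[i] = prev
      return adjusted

  raw = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
  for p, q in zip(raw, benjamini_hochberg(raw)):
      print(f"p = {p:.3f}  ->  q = {q:.3f}")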

Results

Are all appropriate citations made to:

  1. Support scientific statements
  2. Recognise previous work
  3. Describe input data sources

Miscellaneous

  1. Is the terminology used to describe aspects of GO (terms, annotations, etc.), and the general description of GO, correct?
  2. Reviewers should provide specific citations for claims made in the review.

Different types of papers

We should probably also have different guidelines for different types of papers:

  1. Technical papers which use GO
  2. Assessment of annotation accuracy
  3. Functional prediction using GO
    • Are these truly 'functional predictions', or are they annotation omissions (i.e. would the annotations be made from sequence similarity or other methods if the annotations were complete)?
  4. GO tools
    • Do existing tools have this functionality?
    • What additional benefits does the tool provide?
    • How frequently will the resource be updated (ontology and annotations)?
    • Expand from list to GO tool submitters
  5. Papers that evaluate biological results using GO enrichment
    • Have the authors considered enrichment of unknown terms (i.e. those annotated only to the root node)? Many tools (including GO Term Finder) do not handle unknown terms at present. (See the sketch after this list.)
  6. etc.
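
To illustrate the enrichment point: below is a minimal sketch of a hypergeometric over-representation test that also reports study genes annotated only to the root ("unknowns"). The gene-to-term table is toy data, annotations are assumed to be already propagated up the DAG, and a real tool would add multiple-testing correction.

  # Minimal sketch: hypergeometric enrichment test for one GO term, plus a
  # check for "unknown" genes annotated only to the root. Toy data throughout.
  from scipy.stats import hypergeom

  ROOT = "GO:0008150"  # biological_process root

  # Toy annotations: gene -> GO terms (already propagated to ancestors).
  gene2terms = {
      "g1": {"GO:0006281", ROOT}, "g2": {"GO:0006281", ROOT},
      "g3": {"GO:0006281", ROOT}, "g4": {ROOT},  # unknown: root only
      "g5": {"GO:0008152", ROOT}, "g6": {"GO:0008152", ROOT},
      "g7": {"GO:0008152", ROOT}, "g8": {ROOT},  # unknown: root only
  }

  def enrichment_p(term, study, population, annotations):
      """P(X >= k) of seeing k study genes with `term` under the hypergeometric null."""
      M = len(population)                                  # population size
      n = sum(term in annotations[g] for g in population)  # term hits in population
      N = len(study)                                       # study-set size
      k = sum(term in annotations[g] for g in study)       # term hits in study set
      return hypergeom.sf(k - 1, M, n, N)

  study = {"g1", "g2", "g3", "g4"}
  population = set(gene2terms)

  print("GO:0006281 p =", enrichment_p("GO:0006281", study, population, gene2terms))
  print("unknowns in study set:", sorted(g for g in study if gene2terms[g] == {ROOT}))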

Other notes

Perhaps we should also look at journals' guidelines and criteria for assessing informatics papers, and especially multidisciplinary papers (as GO papers increasingly are). A reviewer might be competent to review the biological content but not the technical content, or vice versa.