Reviewing papers

The purpose of this page is to support the members of the GOC by providing guidelines on how to review a paper. Overall: does the research question posed make sense, does the methodology outlined address the question, and do the data and results support the conclusions?

Assessing the scientific merit

(Added some points and moved General up from the bottom. Val)

General

  1. Does the research question posed make sense?
  2. Does the methodology outlined address the question?
  3. Do the data and results support the conclusions?
  4. Is this work a significant advance (or ...) over the previous literature?
  5. Is there anything in the paper that seems uncertain, controversial, or inconclusive? (Question: what do we do in this case?)

Maybe this needs another heading: reviewer's response?

  1. Reviewers should provide specific citations for claims made in the review


Comment (Val): perhaps the points below (which were under Assessing the scientific merit) should go in the relevant sections below. These were specific problems applicable to one particular paper; other papers will have other issues, and this document should be more general.


  1. There should be a minimal number of simplifying assumptions (move to methodology?)
  2. They show an awareness of the rigor curators apply when inferring annotations from sequence similarity (move to source data interpretation? applies only to papers which make some special case about ISS, for example assessing accuracy)
  3. They take into account the relative completeness or incompleteness of annotation, if this is relevant to their analysis and results (move to results?)
  4. BLAST scores should be used intelligently, not simply with arbitrary cut-offs (move to methodology? see the sketch after this list)
  5. They should validate their methods or results against examples of incorrect annotations (move to methods/results?)
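
As an illustration of the BLAST point, a reviewer might check whether hits were filtered on several criteria together rather than a single arbitrary score cut-off. A minimal sketch in Python, parsing standard BLAST tabular output (-outfmt 6); the file name and thresholds are illustrative assumptions, not recommended values:

 # Minimal sketch: filter BLAST tabular output (-outfmt 6) on several
 # criteria instead of one arbitrary bit-score cut-off. File name and
 # thresholds below are illustrative only.
 
 def filter_blast_hits(path, max_evalue=1e-10, min_identity=40.0, min_aln_len=100):
     """Yield (query, subject) pairs that pass all three criteria."""
     with open(path) as handle:
         for line in handle:
             fields = line.rstrip("\n").split("\t")
             query, subject = fields[0], fields[1]
             identity = float(fields[2])   # percent identity
             aln_len = int(fields[3])      # alignment length
             evalue = float(fields[10])    # expect value
             if evalue <= max_evalue and identity >= min_identity and aln_len >= min_aln_len:
                 yield query, subject
 
 for query, subject in filter_blast_hits("hits.tsv"):  # hypothetical file
     print(query, subject)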

Materials (the data sets used as input)

  1. Are the datasets used appropriate to answer the research question?
  2. Is the publication explicit about:
    • Which data sources it has used (version of ontology / date of annotation files / numbers and types of annotations used)
    • How it has partitioned these resources (which subsets of the data were used for which parts of the analysis)

Source Data Interpretation

  1. They take into account the DAG structure in their analysis, and show a consideration for the consequences of misuse of the DAG (direct vs. indirect annotations; see the sketch after this list)
  2. Understanding of current curation best practices
    • Understanding of procedures used by contributing MODs for the evaluation of similarity/orthology for functional transfer
    • Evidence codes (meaning and use), particularly with respect to inferring functions based on sequence similarity (i.e. only from experimental sources)
  3. Awareness of the possible incomplete nature of the annotations due to the curation backlog
  4. Qualifier awareness (especially NOT)
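
To make the direct vs. indirect distinction concrete: an annotation to a GO term implies annotation to all of that term's ancestors in the DAG. A minimal sketch, assuming a toy child-to-parents map (the edges are collapsed for the example; a real analysis should traverse the ontology file itself):

 # Minimal sketch: propagating a direct annotation to ancestor terms.
 # The parent map is a hypothetical toy; edges are collapsed.
 PARENTS = {
     "GO:0006415": {"GO:0006412"},   # translational termination -> translation
     "GO:0006412": {"GO:0008152"},   # translation -> metabolic process (collapsed)
     "GO:0008152": set(),            # root of this toy fragment
 }
 
 def ancestors(term, parents=PARENTS):
     """Return every ancestor of `term`, following all parent edges."""
     seen = set()
     stack = [term]
     while stack:
         for parent in parents.get(stack.pop(), ()):
             if parent not in seen:
                 seen.add(parent)
                 stack.append(parent)
     return seen
 
 # A gene directly annotated to GO:0006415 is indirectly annotated to
 # GO:0006412 and GO:0008152 as well; counting only direct annotations
 # would understate the usage of the more general terms.
 print(ancestors("GO:0006415"))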

Methodology

  1. Is the reviewer familiar with the specific type of analysis used in the manuscript?
  2. Is the test relevant to the research question?
  3. Are the algorithms developed and the reasoning used to evaluate the data robust for the purpose?
  4. Are any statistical tests used appropriate to evaluate significance? (See the sketch after this list.)
  5. Use of third-party software: are versions, parameters, and cut-offs specified?
  6. Reproducibility: are the methods for reaching the conclusions fully described and reproducible?
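
As one example of an appropriate significance test, term-enrichment analyses commonly use a hypergeometric test. A minimal sketch using SciPy; the counts are made up for illustration:

 # Minimal sketch of a term-enrichment test with the hypergeometric
 # distribution (one common choice; all numbers below are made up).
 from scipy.stats import hypergeom
 
 M = 10000   # genes in the annotated background
 n = 200     # background genes annotated to the term of interest
 N = 150     # genes in the study set
 k = 12      # study genes annotated to the term
 
 # P(X >= k): probability of seeing at least k annotated genes by chance.
 p_value = hypergeom.sf(k - 1, M, n, N)
 print(f"p = {p_value:.3g}")
 # Real analyses must also correct for testing many terms at once
 # (e.g. Bonferroni or FDR) and should state which correction was used.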

Results

Are all appropriate citations made to:

  1. Support scientific statements
  2. Recognise previous work
  3. Describe input data sources

Different types of papers

We should also probably have different guidelines for different types of papers:

  1. Technical papers which use GO
  2. Annotation accuracy assessment
  3. Functional prediction using GO
  4. GO Tools
    • Do existing tools have this functionality?
    • What additional benefits does the tool provide?
  5. Papers that evaluate biological results using GO enrichment
    • Have the authors considered enrichment of unknown terms (i.e. those annotated to the root node)? Many tools (including GO Term Finder) do not handle unknown terms at present (see the sketch after this list).
  6. etc...
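
To make the unknown-term point concrete, a reviewer could ask whether the authors checked for genes annotated only to the ontology roots before running enrichment. A minimal sketch; the gene-to-term mapping is a hypothetical example:

 # Minimal sketch: flag genes whose only annotations are to the three GO
 # root terms ("unknown" annotations), which many tools silently drop.
 ROOTS = {"GO:0008150", "GO:0003674", "GO:0005575"}  # BP, MF, CC roots
 
 gene2terms = {  # hypothetical example data
     "geneA": {"GO:0006412", "GO:0005737"},
     "geneB": {"GO:0008150"},            # annotated only to the BP root
 }
 
 unknown_only = [g for g, terms in gene2terms.items() if terms <= ROOTS]
 print("Genes with only root-node annotations:", unknown_only)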

Other notes

Perhaps we should also look at journals' guidelines and criteria for assessing informatics papers, and especially multidisciplinary papers (as GO papers increasingly are). A reviewer might be competent to review the biological content but not the technical content, or vice versa.