Reviewing papers

The purpose of this page is to support members of the GOC by providing guidelines on how to review a paper. Overall: does the research question posed make sense, does the methodology outlined address the question, and do the data and results support the conclusions?

Assessing the scientific merit

(added some and moved General from the bottom, Val)

General

  1. Does the research question posed make sense?
  2. Does the methodology outlined address the question?
  3. Do the data and results support the conclusions?
  4. Is this work a significant advance or .... over the previous literature?
  5. Is there anything in the paper that seems uncertain/controversial/inconclusive? (Q: what do we do in this case?)

Maybe this needs another heading: reviewer's response?

  1. Reviewers should provide specific citations for claims made in the review


Comment (Val): perhaps the points below (which were under "Assessing the scientific merit") should go into the relevant sections below. These were specific problems applicable to this particular paper; other papers will have other issues, and this document should be more general.


  1. There should be a minimal number of simplifying assumptions (move to Methods?)
  2. They show an awareness of the rigor curators apply when inferring annotations from sequence similarity (move to Methods? applies only to papers which make some special case about ISS, for example assessing accuracy)
  3. They take into account the relative completeness or incompleteness of annotation, if this is relevant to their analysis and results (Results)
  4. BLAST scores should be used intelligently, not with simply arbitrary cut-offs (move to Methods; see the sketch after this list)
  5. They should validate their methods or results against examples of incorrect annotations (Methods/Results)
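As a hedged illustration of point 4, the sketch below filters BLAST tabular output using an E-value threshold together with query coverage, rather than a single arbitrary bit-score cut-off. It assumes BLAST's -outfmt 6 output; the threshold values and the credible_hits name are illustrative only, not a recommended standard.

  # Illustrative sketch only: filter BLAST tabular output (-outfmt 6)
  # on E-value plus query coverage, rather than one arbitrary
  # bit-score cut-off. Threshold values here are placeholders.
  import csv

  # Default column order of BLAST -outfmt 6
  FIELDS = ["qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
            "qstart", "qend", "sstart", "send", "evalue", "bitscore"]

  def credible_hits(path, query_lengths, max_evalue=1e-10, min_coverage=0.8):
      """Yield hits passing both an E-value and a query-coverage test.

      query_lengths maps query id -> sequence length; coverage cannot be
      computed from the tabular output alone.
      """
      with open(path) as handle:
          for row in csv.reader(handle, delimiter="\t"):
              hit = dict(zip(FIELDS, row))
              coverage = int(hit["length"]) / query_lengths[hit["qseqid"]]
              if float(hit["evalue"]) <= max_evalue and coverage >= min_coverage:
                  yield hit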

Materials, that is, the data sets used as input

  1. Are the datasets used appropriate to answer the research question?
  2. Is the publication explicit about which data sources it has used and how it has partitioned these resources (i.e. the version of the ontology and the date of the annotation files)? (a sketch of recording this provenance follows this list)
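As one way of checking point 2, the sketch below collects the "!"-prefixed comment header of a GO annotation (GAF) file, which typically records the format version and generation date, so the exact release analysed can be reported. Header keys other than gaf-version vary between sources, so this is an assumption-laden illustration rather than a parser for a fixed specification.

  # Illustrative sketch: collect provenance (format version, generation
  # date, etc.) from the "!"-prefixed header of a GAF file. Keys other
  # than gaf-version vary by source.
  def gaf_header(path):
      meta = {}
      with open(path) as handle:
          for line in handle:
              if not line.startswith("!"):
                  break  # header over; annotation rows begin
              key, sep, value = line.lstrip("!").strip().partition(":")
              if sep:
                  meta[key.strip()] = value.strip()
      return meta

  # e.g. gaf_header("goa_human.gaf").get("gaf-version")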

Source Data Interpretation

  1. Do they take account of the DAG structure in their analysis, and show consideration of the consequences of misusing the DAG (direct vs. indirect annotations)? (see the sketch below)
  2. Understanding of current curation best practices
    • Understanding of the procedures used by contributing MODs for the evaluation of similarity/orthology for functional transfer
    • Evidence codes (meaning and use), particularly with respect to inferring function based on sequence similarity (i.e. only from experimental sources)
  3. Awareness of the possibly incomplete nature of the annotations due to the curation backlog
  4. Qualifier awareness (especially NOT)
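A minimal sketch of points 1, 2 and 4 above, assuming annotations have already been read from a GAF file into simple rows, and that parents maps each GO id to its is_a parents (in practice the ontology would be parsed with a library such as goatools or obonet). The function names and data shapes are assumptions for illustration.

  # Minimal sketch, under the assumptions stated above.

  # GO evidence codes regarded as experimental; annotations transferred
  # by sequence similarity should be traceable back to these.
  EXPERIMENTAL = {"EXP", "IDA", "IPI", "IMP", "IGI", "IEP"}

  def drop_not_annotations(rows):
      """Exclude GAF rows whose qualifier column (index 3) contains NOT.
      A NOT annotation asserts absence of a function and must not be
      counted as a positive annotation."""
      return [r for r in rows if "NOT" not in r[3].split("|")]

  def experimental_only(rows):
      """Keep rows whose evidence code column (index 6) is experimental."""
      return [r for r in rows if r[6] in EXPERIMENTAL]

  def ancestors(term, parents):
      """All is_a ancestors of `term`, excluding `term` itself."""
      seen, stack = set(), list(parents.get(term, ()))
      while stack:
          node = stack.pop()
          if node not in seen:
              seen.add(node)
              stack.extend(parents.get(node, ()))
      return seen

  def propagate(direct, parents):
      """direct: gene -> set of directly annotated GO ids.
      Returns gene -> direct terms plus all their ancestors, i.e. the
      indirect annotations implied by the DAG."""
      return {gene: set(terms) | {a for t in terms for a in ancestors(t, parents)}
              for gene, terms in direct.items()}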

Methodology

  1. Is the reviewer familiar with the specific type of analysis used in the manuscript?
  2. Is the test relevant to the research question?
  3. Are the algorithms developed, and the reasoning used to evaluate the data, robust for the purpose?
  4. Are any statistical tests used appropriate for evaluating significance? (see the sketch after this list)
  5. Use of third-party software: are the version, parameters, and cut-offs specified?
  6. Reproducibility: are the methods for reaching the conclusions fully described and reproducible?
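As one concrete example of point 4 (and of the enrichment analyses mentioned under "Different types of papers" below), the sketch below runs a one-sided hypergeometric test for over-representation of each GO term in a study set, with a Bonferroni correction across all terms tested. It is a hedged illustration of the kind of test a reviewer should expect to see described, not a prescription; the function name and data shapes are assumptions.

  # Illustrative sketch: hypergeometric test for GO term enrichment,
  # Bonferroni-corrected across all terms tested.
  from scipy.stats import hypergeom

  def enrichment_pvalues(study, population, annotations):
      """study, population: sets of gene ids (study a subset of population).
      annotations: GO id -> set of genes annotated to that term
      (after DAG propagation, as sketched earlier).
      Returns GO id -> corrected p-value for over-representation."""
      pvals = {}
      for term, genes in annotations.items():
          k = len(genes & study)          # term hits in the study set
          if k == 0:
              continue
          n = len(genes & population)     # term hits in the population
          # P(X >= k), X ~ Hypergeom(pop size, term hits, study size)
          p = hypergeom.sf(k - 1, len(population), n, len(study))
          pvals[term] = min(1.0, p * len(annotations))  # Bonferroni
      return pvals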

Are all appropriate citations made to:

  1. Support scientific statements
  2. Recognise previous work
  3. Describe input data sources


Different types of papers

We should also probably have different guidelines for different types of papers:

  1. Technical papers which use GO
  2. Annotation accuracy assessment
  3. Functional prediction using GO
  4. GO tools
  5. Papers that evaluate biological results using GO enrichment
  6. etc...

Other notes

Perhaps we should also look at journals' guidelines and criteria for assessing informatics papers, and especially multidisciplinary papers (as GO papers increasingly are). A reviewer might be competent to review the biological content but not the technical content, or vice versa.