Optimal strategy and options for iterative maker2 runs
Hello MAKER community, I am annotating a fungal genome with MAKER2. As input I have transcript evidence, transcripts and proteins from closely related species, a GeneMark .mod file from self-training that I ran outside of MAKER, and an Augustus model from a closely related species. I plan to run MAKER iteratively, retraining the SNAP (and maybe Augustus) models after each round. Having read several iterative MAKER pipelines online, I am a bit confused about the optimal strategy and about some details of the options used in consecutive runs. Some questions:
1) How will MAKER behave if I supply my different lines of evidence (EST + protein) along with trained ab initio models in the same run? I have found what seems to me conflicting information in posts I read (not on this list). One says: "if est2genome and protein2genome are set to 1 + abinitio tools are also on, the abinitio tools will not use the EST-protein evidence to improve their gene models." But another says: "In case you activated SNAP and Augustus and you have fed MAKER with lines of evidence (Transcripts and proteins), it will predict gene models using Augustus-Evidence-driven and SNAP-Evidence-driven. In loci where both are present, it will chose the best one according to the lines of evidence (EST / protein when they are present)." Which one is correct?

2) I see in a few tutorials that GeneMark is trained in a 3rd/4th run, separately from the other ab initio programs. I don't understand why, since GeneMark is self-trained on the genome, so it does not really interact with training from evidence or from MAKER GFF files, does it?

3) Can I pass more than one ab initio model from one run to the next using the pred_gff option, for example Augustus + GeneMark HMMs separated by ","? In a 2017 post, Carson writes: "I would avoid passing in Augustus results as GFF3, it removes the ability of MAKER to dynamically provide Augustus with hints as it runs". What is the correct way then?
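For concreteness, here is roughly how I was planning to set the relevant options in maker_opts.ctl for a combined evidence + ab initio run (the file paths and species name below are placeholders, and I may well be misusing some of these options, so please correct me):

```
# --- evidence (placeholder paths) ---
est=my_species_transcripts.fasta         # transcripts from the organism itself
altest=related_species_transcripts.fasta # transcripts from close relatives
protein=related_species_proteins.fasta   # proteins from close relatives

# --- ab initio predictors ---
snaphmm=round1.snap.hmm                  # SNAP HMM, retrained after each round
gmhmm=selftrain.mod                      # GeneMark .mod from self-training outside MAKER
augustus_species=related_species         # Augustus model from a closely related species

# --- evidence-based predictions ---
est2genome=1
protein2genome=1
```

My question 1 is essentially whether, with these settings, the SNAP/Augustus predictions are hint-driven by the EST/protein alignments, or whether the evidence is only used afterwards to choose among competing models.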
Any input from experienced MAKER users is welcome! Thank you in advance, Anastasia Gioti