Forum: Do you need a deduplication tool for FASTQ data in fastp?
Hi, I am the author of fastp, a tool that provides ultra-fast, all-in-one FASTQ preprocessing.
This tool has received 500+ stars on GitHub (github.com/OpenGene/fastp) and has been cited 40+ times since its paper was published in Bioinformatics about 8 months ago.
Now I am considering adding a deduplication function. This may require some effort to implement, so I would like to ask the users here whether they need this feature.
Your replies will be much appreciated. I will continue to improve this tool.
I will say that deduplication is a far more complex concept than end users initially assume. Even interpreting the meaning of a deduplication plot is far from trivial; I had to give it two tries myself.
In the early days of sequencing, coverage was low, the sequencing process was error-prone, and tools were unable to cope with identical reads, so just about all duplicates were artificial. Today, coverage is much higher and natural duplicates are far more prevalent, and SNP-calling tools can recognize and deal with artificial duplicates from the data itself. Thus the need to deduplicate reads is less critical.
That being said, if you can write a fast and efficient read deduplicator, there is most certainly room for it, especially if it integrates with an existing toolset (fastp). The very fact that a new FASTQ processor can be successful after all these years demonstrates that there is always room for a well-written tool.
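For what it's worth, here is a minimal sketch of the naive approach: exact-sequence deduplication of single-end FASTQ, hashing each sequence so that memory stays bounded. This is purely illustrative Python, not fastp code; the function name and the choice of MD5 digests are my own assumptions.

```python
# Minimal sketch: drop reads whose sequence was already seen (single-end FASTQ).
# Hypothetical example code, not part of fastp.
import sys
import hashlib

def dedup_fastq(in_path, out_path):
    """Keep the first read seen for each distinct sequence."""
    seen = set()            # 16-byte MD5 digests of sequences, to bound memory
    kept = dropped = 0
    with open(in_path) as fin, open(out_path, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]  # @name, seq, +, qual
            if not record[0]:
                break
            digest = hashlib.md5(record[1].rstrip().encode()).digest()
            if digest in seen:
                dropped += 1
                continue
            seen.add(digest)
            kept += 1
            fout.writelines(record)
    print(f"kept {kept}, dropped {dropped} duplicates", file=sys.stderr)

if __name__ == "__main__":
    dedup_fastq(sys.argv[1], sys.argv[2])
```

Storing 16-byte digests rather than full sequences keeps memory at roughly 16 bytes per distinct read, at the cost of a vanishingly small collision risk; a real implementation would also need to hash read pairs together for paired-end data.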
I will also concur with genomax that a read data simulator is something that would help a lot of people. Today the field is very fragmented: one needs a different tool for each target, and the usage is clumsy.
If you're going to implement something along those lines, model it after clumpify from BBMap, where optical duplicates are what get marked and the distance between clusters used for calling duplicates is user-modifiable. Marking optical duplicates is one of the few cases where duplicates should be marked directly on FASTQ files. As an aside, clumpify usually works very well and very quickly, but in a few cases (usually when the rate of optical duplication is quite high) it uses hundreds of GB of RAM and eventually crashes. If you can come up with something with similar performance (in terms of time) but lower worst-case memory requirements, that would be awesome.
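To make the idea concrete, here is a hedged sketch of tile/coordinate-based optical-duplicate marking in the spirit of clumpify, assuming read names follow the common Illumina convention instrument:run:flowcell:lane:tile:x:y. The `dist` threshold plays the role of clumpify's user-settable dupedist parameter; everything else (function names, the Euclidean distance, the brute-force per-tile comparison) is my own illustrative choice, not clumpify's actual algorithm.

```python
# Sketch: flag reads as optical duplicates when they have an identical
# sequence, sit on the same lane/tile, and lie within `dist` pixels of an
# earlier copy. Hypothetical example code, not clumpify's implementation.
import sys
from collections import defaultdict

def parse_coords(header):
    """Extract (lane, tile) and (x, y) from an Illumina-style header line."""
    fields = header.split()[0].split(":")
    lane, tile = fields[3], fields[4]
    x, y = int(fields[5]), int(fields[6])
    return (lane, tile), (x, y)

def mark_optical_duplicates(fastq_path, dist=40):
    """Yield (read name, is_optical_dup) for every read in the file."""
    # Group read coordinates by (sequence, lane, tile); only reads that are
    # identical AND physically close count as optical duplicates.
    clusters = defaultdict(list)
    with open(fastq_path) as fin:
        while True:
            header = fin.readline()
            if not header:
                break
            seq = fin.readline().rstrip()
            fin.readline(); fin.readline()  # skip '+' and quality lines
            tile_key, (x, y) = parse_coords(header)
            group = clusters[(seq, tile_key)]
            is_dup = any((x - px) ** 2 + (y - py) ** 2 <= dist ** 2
                         for px, py in group)
            group.append((x, y))
            yield header.split()[0], is_dup

if __name__ == "__main__":
    for name, dup in mark_optical_duplicates(sys.argv[1]):
        print(name, "OPTICAL_DUP" if dup else "OK")
```

The brute-force distance check within each (sequence, tile) group is quadratic in the size of the group, which is exactly the pathology worth engineering around (e.g. with a per-tile spatial grid) if the goal is better worst-case behavior than the hundreds-of-GB failure mode described above.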