Bioinformatics with Python Cookbook

Getting ready

The 1,000 Genomes Project aims to catalog worldwide human genetic variation, taking advantage of modern sequencing technology to perform WGS. The project makes all of its data publicly available, including sequencer output, sequence alignments, and SNP calls, among many other artifacts. The name "1,000 Genomes" is actually a misnomer, because the project currently includes more than 2,500 samples. These samples are divided into 26 populations spanning the whole planet. We will mostly use data from four populations: African Yorubans (YRI), Utah Residents with Northern and Western European Ancestry (CEU), Japanese in Tokyo (JPT), and Han Chinese in Beijing (CHB). We chose these specific populations because they were the first to come from HapMap, an older project with similar goals that used genotyping arrays; this overlap tells us more about the quality of this subset of samples. We will revisit the 1,000 Genomes and HapMap projects in Chapter 4, Population Genetics.

Next-generation datasets are generally very large. As we will be using real data, some of the files that you download will be big. While I have tried to choose the smallest real examples possible, you will still need a good network connection and a considerable amount of disk space. Waiting for the download will probably be your biggest hurdle in this recipe, but data management is a serious problem with NGS. In real life, you will need to budget time for data transfer, allocate disk space (which can be financially costly), and consider backup policies. The most common initial mistake with NGS is to assume that these problems are trivial; they are not. Even an operation such as copying a set of BAM files across a network, or to another disk on your computer, can become a headache. Be prepared. After downloading large files, at the very least, you should check that their sizes are correct. Some databases also publish MD5 checksums, which you can compare against checksums computed from your downloaded files using tools such as md5sum.
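If you prefer to verify checksums from Python rather than with the md5sum command-line tool, a minimal sketch looks like the following. The function name and the commented-out expected checksum are illustrative, not from any database; the chunked read is there so multi-gigabyte files never need to fit in memory:

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Compute the MD5 checksum of a file, reading it in 1 MB chunks
    so that very large BAM/FASTQ files do not have to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum published by the database, for example:
# expected = "..."  # copied from the download page
# assert file_md5("SRR003265.filt.fastq.gz") == expected
```

A mismatch usually means a truncated or corrupted transfer, so re-download before doing any analysis.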

If you use Jupyter Notebook, do not forget to download the data, as specified in the first cell of Chapter02/Working_with_FASTQ.ipynb. Otherwise, download the SRR003265.filt.fastq.gz file, which is linked in https://github.com/PacktPublishing/Bioinformatics-with-Python-Cookbook-Second-Edition/blob/master/Datasets.ipynb. This is a fairly small file (27 MB) and represents part of the sequenced data of a Yoruban female (NA18489). If you browse the 1,000 Genomes Project data, you will see that the vast majority of FASTQ files are much bigger (by up to two orders of magnitude).
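If you script the download yourself instead of using the notebook, a simple helper such as the sketch below can fetch the file and at least confirm its size on disk (the function name and `expected_size` parameter are illustrative; take the actual URL from the Datasets.ipynb notebook linked above):

```python
import os
import urllib.request

def download_and_check(url, dest, expected_size=None):
    """Download a file to dest and confirm its on-disk size.
    expected_size is the byte count published by the source, if any."""
    urllib.request.urlretrieve(url, dest)
    size = os.path.getsize(dest)
    if expected_size is not None and size != expected_size:
        raise IOError(f"{dest}: got {size} bytes, expected {expected_size}")
    return size
```

For anything bigger than a few hundred megabytes, a resumable downloader (for example, wget -c) is usually a better choice than a one-shot script like this.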

The processing of FASTQ sequence files will mostly be performed using Biopython.
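Before handing the file to Biopython, it can be worth running a quick structural sanity check using only the standard library. Each FASTQ record spans exactly four lines (header, sequence, separator, and per-base qualities), so a line count that is not divisible by four suggests a truncated download. The helper name below is illustrative:

```python
import gzip

def count_fastq_records(path):
    """Count records in a (possibly gzipped) FASTQ file. Each record
    is four lines: @header, sequence, '+', and quality scores."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as handle:
        n_lines = sum(1 for _ in handle)
    if n_lines % 4 != 0:
        raise ValueError("Line count not divisible by 4; truncated FASTQ?")
    return n_lines // 4

# count_fastq_records("SRR003265.filt.fastq.gz")
```

Note that this is only a rough check; the recipe itself uses Biopython's parsers, which validate each record properly as they read it.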