Having no genotype data and only meta-analysis data to proceed on, we want to find all SNPs in linkage disequilibrium (LD) with a significant result from compiled GWAS. So if we have a dbSNP rs#, sweet tools like SNAP http://www.broadinstitute.org/mpg/snap/ldsearch.php can nail down all SNPs in HapMap 2/3 that have some LD with such a SNP. Does a comparable tool, or a relatively simple means, exist for doing this on 1000G data? Or am I in for a trip to the dentist?
Thanks everyone. I signed up for the 1000G seminar at ASHG as well. I'm curious, Larry: what format do you use to load the data into Haploview or HelixTree? We have the phased haplotypes. The Asian (CHB+JPT) dataset we're interested in has, as of the June 2010 freeze, 62 individuals in it so far. My colleague also indicates that he has downloaded PED files for PLINK from the MACH website. I have not tried it yet, but the link is here.
Well, I wrote what we >would< do - we don't do this yet. And it is my colleagues who would be the ones to run those data. I'll have to ask. Check back later...
This has been answered elsewhere by now, but thanks to all who gave input: it requires downloading tabix and vcftools. Use tabix to download the relevant region from 1kG like so:
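The original commands weren't preserved in this thread, but a minimal sketch of the idea, assuming a hypothetical region on chromosome 22 and the 20110521 release referenced later in this thread, would be:

# Hypothetical region and URL; adjust chromosome/coordinates to your locus of interest.
tabix -h ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20110521/ALL.chr22.phase1_release_v3.20101123.snps_indels_svs.genotypes.vcf.gz 22:30000000-30100000 > region.vcf
# Convert the slice to PLINK PED/MAP format with vcftools.
vcftools --vcf region.vcf --plink --out region
# Then compute pairwise LD with PLINK (the r2 threshold is illustrative).
plink --file region --r2 --ld-window-r2 0.8 --out region_ld --noweb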
With PLINK 1.9, the vcftools step can be dropped, as plink can read VCF files as input. So it would be just two lines: tabix to get the VCF file, then plink to get the LD:
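Something like the following, with hypothetical region coordinates and file names (not the exact commands from the comment above):

# Fetch a region straight from the 1000G FTP site with tabix.
tabix -h ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20110521/ALL.chr22.phase1_release_v3.20101123.snps_indels_svs.genotypes.vcf.gz 22:30000000-30100000 > region.vcf
# PLINK 1.9 reads the VCF directly, so no vcftools conversion is needed.
plink --vcf region.vcf --r2 --ld-window-r2 0.8 --out region_ld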
Thanks for pointing this out with PLINK 1.9. I suppose it wasn't asked in the OP's question, but a way to get the classic "LD triangle" is to select a region of the VCF with tabix and then run PLINK 1.9 with --r2 triangle (or --r2 square, in which case you can just plot a heatmap). 1000 Genomes data also produces rather "sparse" correlations when all SNPs are included, so consider adding --maf 0.01 or similar to filter out very rare SNPs (--write-snplist will then tell you the list of variants that were used).
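A sketch of that variant, again with hypothetical file names:

# Square r^2 matrix of common variants; the SNPs kept are listed in region_ld.snplist.
plink --vcf region.vcf --maf 0.01 --r2 square --write-snplist --out region_ld
# Or a lower-triangular matrix for the classic LD-triangle plot:
plink --vcf region.vcf --maf 0.01 --r2 triangle --out region_ld_tri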
Cool, thanks a ton. +2 (unfortunately not possible).
I tried the first tabix command using a BED file of GWAS SNPs, but I got the following error: "[ti_index_core] the indexes overlap or are out of bounds". I used hg19_gwas_snps.bed and the hg19 1kG data from ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20110521/
VCFtools outputs alleles in the order in which they are found in the VCF file, but PLINK doesn't preserve phase information. The phase will be lost, but the order is preserved in the file, so in theory it could be recovered.
The minimum number of individuals you need to calculate LD with confidence is 50, and they must be unrelated. Using data from 100 unrelated individuals would be much better. Our tool of choice is HelixTree from Golden Helix. My colleague says that Haploview will also work. The input data are the variants; you don't need the invariant sequence. We would use a length of sequence from 200 to 500 kbp on either side of the region/gene of interest. Yes, this is big, but until you calculate LD, you don't know how far it stretches from the variant(s) of interest.
Of course, if you plan to study a particular region, programs like HelixTree or Haploview will do, but if you want LD measures across the entire human genome, there is no program yet (none that I am aware of) that can handle the large variation (~14M variants) covered by this project. But I definitely see that focusing on particular regions is, as a workaround, an intelligent approach.
Hey folks--I just wanted to add a couple of tidbits from the 1000G tutorial session at ASHG.
The whole session was videotaped. It will eventually be available on genome.gov and the 1000G site. I will keep an eye out for that and let you know. Their slide presentations will also be available.
In Paul Flicek's session he showed LD data in the browser on the 1000G site, but his slide says that it is "Currently based on data from HapMap and Perlegen populations" and "Populations selectable from drop-down tab". I haven't had a chance to look for this yet, and I have my crappy road computer, so I won't even bother right now.
Someone specifically asked about LD on these data, and Paul answered that no LD tools exist for them right now. So I would say that if you are going to wrestle with this, it will definitely be toothache time.
PS: someone mentioned having downloaded the variation files (?) on the day the paper was released, and that they seemed to be a subset. But they said in the session that a new file had gone up at 2 pm on the afternoon of that session and it was MUCH bigger--so if you haven't looked recently, you might want to check out the files again. There were supposedly changes to the browser that day as well.
If you look up a single SNP in the Ensembl genome browser, there is a linkage disequilibrium tab on the left. Three of the populations listed are from the 1000 Genomes pilot data. I am not sure what the inner workings are, so if anyone can clarify what Ensembl is posting, that would be great, but the few LD calculations I have checked appear correct, even for SNPs not in HapMap. Ensembl is the best tool for this I have seen so far.
Here is a Makefile.in which I used to generate LD data from the 1kG phase 1 data. It can do everything from downloading the VCF files to calculating LD using INTERSNP or PLINK. Some Perl scripts and binaries are missing, but it should serve as an example at least. One can use make -j 8 to run the processes in parallel, but make download should be run first.
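The attached Makefile.in itself is not reproduced in this thread; a minimal sketch of its shape, with hypothetical target names and a PLINK-only LD step, might look like this:

# Hypothetical sketch: download the phase1 VCFs per chromosome, then compute LD with PLINK.
CHROMS := 1 2 3 22
VCF_URL := ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20110521

download: $(foreach c,$(CHROMS),chr$(c).vcf.gz)

chr%.vcf.gz:
	wget -O $@ $(VCF_URL)/ALL.chr$*.phase1_release_v3.20101123.snps_indels_svs.genotypes.vcf.gz

ld: $(foreach c,$(CHROMS),chr$(c).ld)

chr%.ld: chr%.vcf.gz
	plink --vcf $< --maf 0.01 --r2 --out chr$*

Run make download first, then make -j 8 ld to build the per-chromosome LD tables in parallel.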
As far as I know, there are no tools yet working on 1000 Genomes data that deal with LD. The official browser allows only browsing the data, and the kind of information we are used to from HapMap is not yet present, although I do not doubt that they will provide it soon.
We were in a similar situation months ago, and we in fact had to adapt our own tool for browsing population statistics, SPSmart, to accept data from 1000 Genomes, and thereby extract allele frequencies and Fst values from their pilot datasets. I guess you should start preparing your teeth...
Hello, I like all the methods mentioned here, but they are still not as user-friendly for non-bioinformaticians as the SNAP tool. I'm still looking for a program/script where I can just paste a list of SNPs (500+) and get all the European-population SNPs with r2 > 0.8 as output... Hope someone can help me out!
vcftools --vcf chr22.phase1_release_v3.20101123.snps_indels_svs.genotypes.refpanel.EUR.vcf --out TRY_OUT.plink --plink
> VCFtools - v0.1.7
> (C) Adam Auton 2009
>
> Parameters as interpreted:
> --vcf chr22.phase1_release_v3.20101123.snps_indels_svs.genotypes.refpanel.EUR.vcf
> --out TRY_OUT.plink
> --plink
>
> VCF index is older than VCF file. Will regenerate.
> Building new index file.
> Scanning Chromosome: 22
> Warning - file contains entries with the same position. This is not supported by vcftools, and may cause unexpected behaviour.
> Writing Index file.
> File contains 232005 entries and 379 individuals.
> Applying Required Filters.
> After filtering, kept 379 out of 379 Individuals
> After filtering, kept 232005 out of a possible 232005 Sites
> Writing PLINK PED file ...
> Writing PLINK MAP file ...
> Done. Run Time = 28.00 seconds
plink --file TRY_OUT.plink --r2 --inter-chr --ld-snp-list chr22snp.txt --ld-window-r2 0.8 --out TRY_chr22_LD_r08 --noweb
@----------------------------------------------------------@
| PLINK! | v1.07 | 10/Aug/2009 |
|----------------------------------------------------------|
| (C) 2009 Shaun Purcell, GNU General Public License, v2 |
|----------------------------------------------------------|
| For documentation, citation & bug-report instructions: |
| http://pngu.mgh.harvard.edu/purcell/plink/ |
@----------------------------------------------------------@
Skipping web check... [ --noweb ]
Writing this text to log file [ TRY_chr22_LD_r08.log ]
Analysis started: Thu Jan 16 15:19:43 2014
Options in effect:
--file TRY_OUT.plink
--r2
--inter-chr
--ld-snp-list chr22snp.txt
--ld-window-r2 0.8
--out TRY_chr22_LD_r08
--noweb
232005 (of 232005) markers to be included from [ TRY_OUT.plink.map ]
Warning, found 379 individuals with ambiguous sex codes
These individuals will be set to missing ( or use --allow-no-sex )
Writing list of these individuals to [ TRY_chr22_LD_r08.nosex ]
379 individuals read from [ TRY_OUT.plink.ped ]
0 individuals with nonmissing phenotypes
Assuming a disease phenotype (1=unaff, 2=aff, 0=miss)
Missing phenotype value is also -9
0 cases, 0 controls and 379 missing
0 males, 0 females, and 379 of unspecified sex
Before frequency and genotyping pruning, there are 232005 SNPs
379 founders and 0 non-founders found
Total genotyping rate in remaining individuals is 1
0 SNPs failed missingness test ( GENO > 1 )
0 SNPs failed frequency test ( MAF < 0 )
After frequency and genotyping pruning, there are 232005 SNPs
After filtering, 0 cases, 0 controls and 379 missing
After filtering, 0 males, 0 females, and 379 of unspecified sex
Writing LD statistics to [ TRY_chr22_LD_r08.ld ]
Analysis finished: Thu Jan 16 15:21:03 2014
Interesting. I'm curious whether LD has already been computed by someone for these genomes at all. Would it be computationally feasible?
I don't see that happening until the dataset is frozen, i.e., until no more individuals are being sequenced and added. But yes, I think it would be feasible.
Some data on this LD does exist. I ended up pulling data from the files with code like this:
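The snippet itself didn't survive in this thread; a plausible sketch, assuming per-chromosome PLINK .ld files like those generated above and a hypothetical SNP of interest (rs12345), would be:

# Keep the header plus every pair involving rs12345 (SNP_A is column 3, SNP_B is column 6 in PLINK .ld output).
awk 'NR==1 || $3=="rs12345" || $6=="rs12345"' chr22_ld.ld > rs12345_partners.txt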
Hi Ryan, where did you find the LD data on the 1000 Genomes site? Could you post the link to the file? It would help me big time. Thanks!