Sunday, May 14, 2017

Unique on one column and keep maximum value from another column in Unix

If you have a file, e.g.

head id_merge_ens_length.txt
ENSMUSG00000095309 924
ENSMUSG00000000126 3318
ENSMUSG00000000126 3320
ENSMUSG00000086196 1381
ENSMUSG00000086196 2127
ENSMUSG00000054418 1784
ENSMUSG00000095268 1960
ENSMUSG00000082399 480
ENSMUSG00000097090 1650
ENSMUSG00000020063 3353
and you want to make the first column unique while keeping the maximum from the second column, first use sort to order by column 1, with a secondary reverse numerical sort on column 2:

sort -k1,1 -k2,2nr file.txt | head
ENSMUSG00000000001 3262
ENSMUSG00000000003 902
ENSMUSG00000000003 697
ENSMUSG00000000028 2143
ENSMUSG00000000028 1747
ENSMUSG00000000028 832
ENSMUSG00000000031 2286
ENSMUSG00000000031 1853
ENSMUSG00000000031 935
ENSMUSG00000000031 817
Now, to keep the row with the maximum value for each unique entry in column one, use awk. awk '!a[$1]++' is a short version of awk '{if (!a[$1]) {print; a[$1]++}}': if the current first field ($1) is not yet present in the array a (that is, this is the first time the field is seen), the line is printed and a[$1] is incremented. The next time the same field appears it is already in the array, so the condition !a[$1] fails and the line is skipped. Since the input is sorted with the largest value first, the printed line for each key is the one carrying the maximum.

sort -k1,1 -k2,2nr file.txt | awk '!a[$1]++' | head
ENSMUSG00000000001 3262
ENSMUSG00000000003 902
ENSMUSG00000000028 2143
ENSMUSG00000000031 2286
ENSMUSG00000000037 4847
ENSMUSG00000000049 1190
ENSMUSG00000000056 4395
ENSMUSG00000000058 2733
ENSMUSG00000000078 4217
ENSMUSG00000000085 3544
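The same result can be obtained in a single awk pass, without sorting, by tracking a running maximum per key. A sketch on a few of the rows above (the output order of an awk array loop is arbitrary, hence the final sort):

```shell
# A few sample rows from the file above
cat > file.txt <<'EOF'
ENSMUSG00000000126 3318
ENSMUSG00000000126 3320
ENSMUSG00000086196 1381
ENSMUSG00000086196 2127
EOF

# Keep the maximum of column 2 for each unique column 1 in one pass;
# the ($1 in max) test makes the first value stick even if it is 0.
awk '!($1 in max) || $2 > max[$1] {max[$1] = $2}
     END {for (i in max) print i, max[i]}' file.txt | sort
```

This avoids the sort -k1,1 -k2,2nr step entirely, which can matter for very large files.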

Thursday, May 11, 2017

Defining and calculating genome coverage, depth and breadth of coverage

The average whole-genome coverage in a sequencing assay (i.e. the depth of coverage) is estimated with the formula:


coverage = (read count * read length) / total genome size
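As a quick worked example with hypothetical numbers: 30 million reads of 100 bp against a ~3 Gb genome give roughly 1x depth of coverage:

```shell
# depth = (read count * read length) / genome size
# 30e6 reads * 100 bp / 3e9 bp = 1x
awk 'BEGIN { printf "%.1fx\n", (30e6 * 100) / 3e9 }'
```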
Similarly, you can estimate the coverage of each gene or gene locus:


gene coverage = (gene read count * read length) / gene size
In a real sequencing experiment the distribution of mapped reads is uneven and, in addition, not all portions of the reads will map to the genome. Therefore a specific read coverage is calculated for each nucleotide (per-base read coverage), for example using bedtools genomecov:

bedtools genomecov -ibam file.bam -g my.genome -d | head

chr1  6  0
chr1  7  0
chr1  8  0
chr1  9  0
chr1  10 0
chr1  11 1
chr1  12 1
chr1  13 1
chr1  14 1
chr1  15 1
Breadth of coverage, in contrast, is a different term from depth of coverage: it refers to the proportion of the genome that is covered by reads. Both breadth and depth of coverage depend on the sequencing depth, i.e. the number of reads generated in the sequencing experiment.
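Given the per-base genomecov output above, both quantities can be computed with a short awk sketch (using the ten example positions; column 3 is the per-base depth):

```shell
# The ten per-base rows shown above (chrom, position, depth)
cat > perbase.txt <<'EOF'
chr1 6 0
chr1 7 0
chr1 8 0
chr1 9 0
chr1 10 0
chr1 11 1
chr1 12 1
chr1 13 1
chr1 14 1
chr1 15 1
EOF

# Mean depth = sum of depths / positions; breadth = covered positions / positions
awk '{sum += $3; if ($3 > 0) covered++}
     END {printf "mean depth: %.1f\nbreadth: %.1f%%\n", sum/NR, 100*covered/NR}' perbase.txt
```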

For details see publication:

Wednesday, April 26, 2017

Awk code for counting isoforms in abundance.tsv files from Kallisto

If you have a series of abundance.tsv files from the Kallisto RNA-Seq quantification tool, each in a different folder, use this command to count the protein-coding isoforms in each output file:

sudo find . -name abundance.tsv -exec sh -c "echo {}; grep protein_coding {} | wc -l"  \;
./1522_1hr_TNF_2/abundance.tsv
79795
./1522_6hr_TNF_2/abundance.tsv
79795
./1522_6hr_TGFB_2/abundance.tsv
79795
./1522_1hr_PMA_1/abundance.tsv
79795
./2989_6hr_TNF_2/abundance.tsv
79795
./2989_6hr_SF_1/abundance.tsv

...
To save the list of protein-coding isoforms in a separate file per sample, use:

sudo find . -name abundance.tsv -exec sh -c "grep protein_coding {} > {}.protein_coding"  \;

To count isoforms that are not expressed, use awk to count the rows with est_counts == 0 (the fourth column):

find . -type f -name abundance.tsv -exec sh -c 'awk "\$4==0{print \$0}" "{}" | wc -l' \;
108088
109840
113075
108092
119730
106363
110323
119521
117195
117358
98931

...
Similarly, to count isoforms that are expressed, use:

find . -type f -name abundance.tsv -exec sh -c 'awk "\$4!=0{print \$0}" "{}" | wc -l' \;
90532
88780
85545
90528
78890
92257
88297
79099
81425
81262
99689
91844
89495
...
To list the sample names contained in the folder names:

find . -type f -name abundance.tsv.protein_coding -exec sh -c 'echo $(basename $(dirname {}))' \;
1522_1hr_TNF_2
1522_6hr_TNF_2
1522_6hr_TGFB_2
1522_1hr_PMA_1
2989_6hr_TNF_2
2989_6hr_SF_1
1522_6hr_SF_2
2989_6hr_TNF_1
1522_1hr_TGFB_2
1522_1hr_TGFB_1
1522_1hr_PDGF_1
1522_6hr_PDGF_2
1522_6hr_TNF_1
2989_1hr_PDGF_1
...
Make folders named after each sample:

find /home/mpjanic/HCASMC_RNASeq/ -type f -name abundance.tsv.protein_coding -exec sh -c 'mkdir $(basename $(dirname {}))' \;
Find out how many isoforms are expressed within the subgroup of protein-coding isoforms:

find . -type f -name abundance.tsv.protein_coding -exec sh -c 'awk "\$4!=0{print \$0}" "{}" | wc -l' \;
43139
42014
41017
43618
38208
43021
42031
38464
40069
40246
46087
43028
42478

...
To save in each folder a file with the isoforms that are expressed within the subgroup of protein-coding isoforms:

find . -type f -name abundance.tsv.protein_coding -exec sh -c 'awk "\$4!=0{print \$0}" "{}" > {}.expressed ' \;
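One caveat about the commands above: substituting {} inside a quoted sh -c string works with GNU find, but it is not guaranteed by POSIX and breaks on paths containing spaces. A more portable sketch (with hypothetical folder names and values) passes the found path as a positional parameter instead:

```shell
# Hypothetical sample layout mimicking the Kallisto output folders
mkdir -p demo/sample_A demo/sample_B
printf 'tx1\t100\t90\t0\t0\ntx2\t100\t90\t5\t1.2\n' > demo/sample_A/abundance.tsv
printf 'tx1\t100\t90\t3\t0.8\ntx2\t100\t90\t0\t0\n' > demo/sample_B/abundance.tsv

# The found path arrives as $1 inside the sh -c script, so it can be quoted safely
find demo -type f -name abundance.tsv \
  -exec sh -c 'printf "%s %s\n" "$1" "$(awk "\$4!=0" "$1" | wc -l)"' _ {} \;
```

The _ placeholder fills $0 of the inner shell; each file name then becomes $1.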

Tuesday, April 4, 2017

Parsing dbSNP for insertions, single nucleotide polymorphisms and large deletions - awk code

If you download the dbSNP database file in BED format, for example via MySQL or the UCSC Table Browser,

(MySQL command for dbSNP147)

mysql --user=genome --host=genome-mysql.cse.ucsc.edu -A -N -D hg19 -e 'SELECT chrom, chromStart, chromEnd, name FROM snp147Common' > snp147Common.bed
you will notice that the variant coordinates fall roughly into three categories:

1. insertions (identical start and end coordinates),
2. SNPs plus simple deletions (coordinates spanning a single base pair), and
3. large deletions (coordinates differing by more than one base pair).

To parse and separate these three categories, use the following awk code: first check whether $2 equals $3 and place the rows in this category into the .insertions file; then select the rows where $3 == $2 + 1 and place them into .snp.plus.simple.deletions; and finally select the rows that match neither criterion, i.e. those where $2 and $3 differ by more than 1 bp, and place them into .large.deletions.


#parse dbSNPs into insertions, SNPs and simple deletions, large deletions

if [ ! -f snp147Common.bed.insertions ]
then
awk '$2 == $3 {print $0}' snp147Common.bed > snp147Common.bed.insertions
fi

if [ ! -f snp147Common.bed.snp.plus.simple.deletions ]
then
awk '$3 == $2+1 {print $0}' snp147Common.bed > snp147Common.bed.snp.plus.simple.deletions
fi

if [ ! -f snp147Common.bed.large.deletions ]
then
awk '{if ($3 != $2+1 && $2 != $3) print $0}' snp147Common.bed > snp147Common.bed.large.deletions
fi
Check whether the output files satisfy the selection criteria:


mpjanic@zoran:~/chrPos2rsID$ head snp147Common.bed.insertions
chr1 10177 10177 rs367896724
chr1 10352 10352 rs555500075
chr1 13417 13417 rs777038595
chr1 15903 15903 rs557514207
chr1 54712 54712 rs568927205
chr1 91551 91551 rs375085441
chr1 249275 249275 rs200079338
chr1 255923 255923 rs199745078
chr1 363244 363244 rs572571697
chr1 604229 604229 rs556776674
mpjanic@zoran:~/chrPos2rsID$ head snp147Common.bed.snp.plus.simple.deletions
chr1 11007 11008 rs575272151
chr1 11011 11012 rs544419019
chr1 13109 13110 rs540538026
chr1 13115 13116 rs62635286
chr1 13117 13118 rs62028691
chr1 13272 13273 rs531730856
chr1 14463 14464 rs546169444
chr1 14598 14599 rs531646671
chr1 14603 14604 rs541940975
chr1 14672 14673 rs4690
mpjanic@zoran:~/chrPos2rsID$ head snp147Common.bed.large.deletions
chr1 17358 17361 rs749387668
chr1 63735 63738 rs201888535
chr1 66435 66437 rs560481224
chr1 82133 82135 rs550749506
chr1 129010 129013 rs377161483
chr1 267227 267230 rs374780253
chr1 532325 532327 rs577455319
chr1 612688 612691 rs201365517
chr1 691567 691571 rs566250387
chr1 701779 701783 rs201234755
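The three passes over snp147Common.bed can also be collapsed into a single awk pass that routes each row to the appropriate file. A sketch on three representative rows:

```shell
# One row from each category (coordinates taken from the listings above)
cat > snp.bed <<'EOF'
chr1 10177 10177 rs367896724
chr1 11007 11008 rs575272151
chr1 17358 17361 rs749387668
EOF

# Insertions: start == end; SNPs/simple deletions: end == start + 1;
# everything else: large deletions
awk '$2 == $3     {print > "snp.bed.insertions"; next}
     $3 == $2 + 1 {print > "snp.bed.snp.plus.simple.deletions"; next}
                  {print > "snp.bed.large.deletions"}' snp.bed
```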

Wednesday, March 8, 2017

Collapsing transcript expression levels into gene levels by sum, maximum and average - awk code

Suppose you have generated a file containing gene, transcript (isoform) and expression level columns:

DN0a272187:~ milospjanic$ cat file.txt
GENE1,ISOFORM1,4
GENE1,ISOFORM2,6
GENE1,ISOFORM3,9
GENE2,ISOFORM1,43
GENE2,ISOFORM2,16
GENE3,ISOFORM3,19
GENE3,ISOFORM1,43
GENE3,ISOFORM2,4
GENE4,ISOFORM1,21
Use this awk code to collapse the file by summing at the gene level: accumulate all $3 values in an array keyed by $1, then loop through the array and print the keys and values.

awk -F, '{array[$1]+=$3} END { for (i in array) {print i"," array[i]}}' file.txt
GENE1,19
GENE2,59
GENE3,66
GENE4,21
To collapse the isoform file by selecting the top isoform, use this awk code. It keeps a max array keyed by $1: whenever $3 exceeds max[$1], both max[$1] and array[$1] are updated to the new value. It then loops through the array and prints the keys and values.

awk -F, '{if($3>max[$1]){array[$1]=$3; max[$1]=$3}} END { for (i in array) {print i"," array[i]}}' file.txt
GENE1,9
GENE2,43
GENE3,43
GENE4,21
To compute the average, use the following code. The array hash, keyed by $1, sums the $3 values, while the no hash, also keyed by $1, counts the entries. The END block loops through the keys and prints array[i]/no[i], i.e. the sum divided by the number of transcripts.

awk -F, '{array[$1]+=$3; no[$1]+=1} END { for (i in array) {print i"," array[i]/no[i]}}'  file.txt
GENE1,6.33333
GENE2,29.5
GENE3,22
GENE4,21
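All three summaries (sum, maximum and average) can also be produced in a single pass over the same kind of file; a sketch on a subset of the rows above:

```shell
# Sample gene,isoform,expression rows as in the post
cat > file.txt <<'EOF'
GENE1,ISOFORM1,4
GENE1,ISOFORM2,6
GENE1,ISOFORM3,9
GENE2,ISOFORM1,43
GENE2,ISOFORM2,16
EOF

# Accumulate sum, count and running maximum per gene, then emit all three
awk -F, '{sum[$1] += $3; n[$1]++; if ($3 > max[$1]) max[$1] = $3}
         END {for (g in sum) printf "%s,sum=%s,max=%s,avg=%s\n", g, sum[g], max[g], sum[g]/n[g]}' file.txt | sort
```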

Wednesday, March 1, 2017

Resolving DESeq error: non-numeric argument to mathematical function

Running the following DESeq code,

#!/usr/bin/Rscript
library(DESeq)
data<-read.table("mastertable.diff", header=T, row.names = 1, check.names=F)
meta<-read.delim("meta.data.diff", header=T)
conds<-factor(meta$Condition)
sampleTable<-data.frame(sampleName=colnames(data), condition=conds)
countsTable = data
cds <- newCountDataSet( countsTable, conds)
cds <- estimateSizeFactors( cds )
sizeFactors( cds )
head(counts(cds))
head(counts(cds,normalized=TRUE))
cds = estimateDispersions( cds, method="blind", sharingMode="fit-only" )
str( fitInfo(cds) )
plotDispEsts( cds )
res = nbinomTest( cds, "CTRL-MOCK", "CRISPR-DOWN" )
head(res)
plotMA(res)
hist(res$pval, breaks=100, col="skyblue", border="slateblue", main="")
resSig = res[ res$padj < 0.1, ]
write.csv( res, file="Result_Table.csv" )
write.csv( resSig[ order( resSig$foldChange, -resSig$baseMean ), ] , file="DownReg_Result_Table.csv" )
write.csv( resSig[ order( -resSig$foldChange, -resSig$baseMean ), ], file="UpReg_Result_Table.csv" )
cdsBlind = estimateDispersions( cds, method="blind" )
vsd = varianceStabilizingTransformation( cdsBlind )
library("RColorBrewer")
library("gplots")
select = order(rowMeans(counts(cds)), decreasing=TRUE)[1:250]
hmcol = colorRampPalette(brewer.pal(9, "GnBu"))(100)
heatmap.2(exprs(vsd)[select,], col = hmcol, trace="none", margin=c(10, 6))
select = order(rowMeans(counts(cds)), decreasing=TRUE)[1:500]
heatmap.2(exprs(vsd)[select,], col = hmcol, trace="none", margin=c(10, 6))
select = order(rowMeans(counts(cds)), decreasing=TRUE)[1:1000]
heatmap.2(exprs(vsd)[select,], col = hmcol, trace="none", margin=c(10, 6))
print(plotPCA(vsd, intgroup=c("condition")))
I got an error,

> cds <- newCountDataSet( countsTable, conds)
Error in round(countData) : non-numeric argument to mathematical function
The data frame looked fine at first glance,

> head(data)
             up_207_1 up_207_5 up_207_7 up_207_11 up_207_13 up_207_17
ARL14EPL            0        0        0         0         1         0
LOC100169752        0        0        0         0         0         0
CAPN6               0        0        0         0         0         0
IFNA13             19       20       58        14        17        13
HSP90AB1         6030    10422    15058      3030      8696      3848
DACH1               0        0        0         0         0         0
But checking the classes and modes of the columns with sapply revealed that they were of class factor.

> sapply(data, mode)
 up_207_1  up_207_5  up_207_7 up_207_11 up_207_13 up_207_17 
"numeric" "numeric" "numeric" "numeric" "numeric" "numeric" 
> sapply(data, class)
 up_207_1  up_207_5  up_207_7 up_207_11 up_207_13 up_207_17 
 "factor"  "factor"  "factor"  "factor"  "factor"  "factor" 
Changing the class of columns to numeric,

> numc <- sapply(data, is.factor)
> numc
 up_207_1  up_207_5  up_207_7 up_207_11 up_207_13 up_207_17 
     TRUE      TRUE      TRUE      TRUE      TRUE      TRUE 
> data[numc] <- lapply(data[numc], function(x) as.numeric(as.character(x)))
Warning messages:
1: In FUN(X[[i]], ...) : NAs introduced by coercion
2: In FUN(X[[i]], ...) : NAs introduced by coercion
3: In FUN(X[[i]], ...) : NAs introduced by coercion
4: In FUN(X[[i]], ...) : NAs introduced by coercion
5: In FUN(X[[i]], ...) : NAs introduced by coercion
6: In FUN(X[[i]], ...) : NAs introduced by coercion

Rerunning the script with these lines added gave the same error, since NAs had been introduced. I then checked which rows contain NAs in the data frame.

> data[!complete.cases(data),]
     up_207_1 up_207_5 up_207_7 up_207_11 up_207_13 up_207_17
Gene       NA       NA       NA        NA        NA        NA
So the data frame contained an empty row that was assigned NAs by the as.numeric conversion. Removing all rows with NAs,

> data<-na.omit(data)
> data[!complete.cases(data),]
[1] up_207_1  up_207_5  up_207_7  up_207_11 up_207_13 up_207_17
<0 rows> (or 0-length row.names)
The data frame is now of numeric class and free of NAs, and rerunning the script succeeded.
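A stray row like this can also be caught in the shell before R ever loads the table. A sketch, assuming a tab-separated layout like mastertable.diff (the file name and values here are made up):

```shell
# Hypothetical count table with one stray, empty row named "Gene"
printf 'gene\ts1\ts2\nHSP90AB1\t6030\t10422\nGene\t\t\nDACH1\t0\t0\n' > counts.txt

# Report any data row in which some count field is not a plain integer
awk -F'\t' 'NR > 1 { for (i = 2; i <= NF; i++)
                       if ($i !~ /^[0-9]+$/) { print NR": "$0; break } }' counts.txt
```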

Wednesday, February 22, 2017

Transfer folder contents via FTP without prompting

If you need to transfer multiple files from a folder via ftp (folder transfer per se is not supported), cd into the local folder, connect to the ftp server and use mput to upload multiple files. The catch is that you will be prompted to confirm each file, which is impractical for large folders. To avoid this, connect to ftp with the -i option, which turns off interactive prompting.


Miloss-MacBook-Air:~ milospjanic$ cd local_folder
Miloss-MacBook-Air:~ milospjanic$ ftp -i sftp.lsnet.ucla.edu
Connected to sftp.lsnet.ucla.edu.
220 (vsFTPd 2.2.2)
Name (sftp.lsnet.ucla.edu:milospjanic): username
331 Please specify the password.
Password: password
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> mput *