Computing Workflows for Biologists – Dr. Tracy Teal

I’m so excited to be visiting the Microbial Diversity Course at the Marine Biological Lab in Woods Hole, Massachusetts right now. I’m really enjoying talking with students and faculty working on projects related to microbial communities, aspects of microbial metabolism, microbial genomics, and transcriptomics. I’m here with our lab’s MinION to sequence genomes from cultured microorganisms isolated by students during the course. (More about this in a future blog post!)

View of Eel Pond from MBL St.


Each day of the course, there are lectures in the morning on a variety of interesting topics relevant to microbial diversity. For those not familiar, this field is rapidly accumulating and analyzing large collections of data. For example, see Raza and Luheshi (2016).

Dr. Tracy Teal, Executive Director of Data Carpentry, gave us an inspiring talk this morning on data analysis, reproducibility, and sharing.


Read her paper, which summarizes these topics:

Shade and Teal. 2015. Computing Workflows for Biologists: A Roadmap. PLoS Biology. doi:10.1371/journal.pbio.1002303

She raises a number of interesting points and gives good advice relevant to the growing amount of data in biology, so I wanted to write them down to share here.

Dr. Teal opens with the question: “How many people use computers for your work?” Everyone in the room raised their hand.

We all use our computers for some aspect of our research.

The reasons for using good practices for data management and computer usage are not just for the greater good, but for you. And your sanity. We all appreciate how much data even one project can generate, and this is not going to change: data production is trending upward over time. Thinking about this and planning now will help the future you, even if you rely on others for the bulk of the data analysis.

“How many of you work with other people?” Everyone works in a team in a lab, and sometimes with outside collaborators. There is generally a need to communicate with others about data analyses so that someone besides you can understand what you did. Paper reviewers and readers should be able to understand. But first, there are the people in your lab. This is the “leaving science forever” test: ask yourself whether what you are doing could be followed by someone else if you were to suddenly leave. Have you ever taken over a project from someone and found that the files, samples, and notebooks were not descriptive enough for you to pick up where they left off? Don’t wait until this happens. The more transparent and vigilant you are about this on a regular basis, the happier you will be in the future.

What knowledge and elements are necessary for these good practices?


  1. How were data generated?
  2. Where are raw data located? (e.g. HPLC files, *.txt files, *.fastq sequence files, microarray *.cel files, etc)
  3. What were the data cleaning steps? (e.g. formatting steps between raw data and doing something interesting with software. This is actually a HUGE part of data analysis pipelines and can be >80% of your work. The more of these steps you can automate, the better off you will be in the future.)
  4. Steps of the data analysis: exact parameters used, software versions
  5. Final plots and charts: This is the least important. If you keep track of the other steps, you should be able to recreate the exact plots very easily.

Let’s talk about data.

Keep raw data files raw. Make copies of raw files before you start to work with the data. Post these files somewhere public, in a place where they will not be deleted (why not make them public?). If you don’t want to do that, put them in a safe lockbox, but one where someone else can access them if needed.
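A small habit that helps here is to script the copy step and write-protect the original. Here is a minimal Python sketch (the paths are just examples):

import os
import shutil
import stat

# Copy the raw file into a working directory, then make the original
# read-only so it cannot be overwritten by accident (paths are examples).
shutil.copy("raw/run1.fastq", "working/run1.fastq")
os.chmod("raw/run1.fastq", stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)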

How many people have a data management plan? If a lab has a policy about where data have to be placed, besides someone’s personal hard drive, the information will have a greater chance of surviving past the time when people leave the lab.

Let’s talk about spreadsheets.

Have you ever done something in an Excel spreadsheet that made you sad? We all have. A single column gets re-sorted rather than the whole sheet. Autocorrected spelling changes gene names. Dates get mangled. MS Excel makes these formatting mistakes, and Google Sheets makes the same ones.

Train yourself to think like a computer.

There are rules for using Excel. This may seem silly, but following these rules will actually save you and your collaborators much time. People know spreadsheets, but many biologists use them in a way that is time-consuming in the long run, e.g. laying out information to be read by humans, with color-coding and notes.

Follow these simple rules:

  • Put each variable into a separate column
  • Do not use color to convey information. Instead, add a column (e.g. “calibrated”) with an associated one- or two-word code, e.g. YES or NO, EXTRACTED or NOT, etc.
  • Do not use Excel data files to write out long metadata notes about your file. These are best saved in a separate README file.
  • Leave raw data raw. If you’re going to transform data or perform a calculation, create a new file or separate column(s).
  • Break data down into the finest-scale resolution to give yourself the most options. Don’t combine multiple types of information into one column, e.g. Species-Sex or Month-Year. One simple trick to avoid Excel’s annoying auto-formatting of dates: use three separate columns for month, day, and year. This lets you look at date ranges (e.g. only fall), easily pull out years, or select the 15th of every month. It gives you more flexibility! (See the sketch after this list.)
  • Export your .xls into a .csv to avoid errors in downstream analyses
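To make the “one variable per column” rule concrete, here is a minimal Python sketch (the file and column names are hypothetical) that splits a combined Species-Sex column and writes the result back out as plain .csv:

import pandas as pd

# Hypothetical input: a sheet with a combined "Species-Sex" column.
# Splitting it gives one variable per column.
df = pd.read_csv("samples.csv")
df[["species", "sex"]] = df["Species-Sex"].str.split("-", expand=True)
df = df.drop(columns=["Species-Sex"])

# Write back out as plain .csv for downstream tools.
df.to_csv("samples_clean.csv", index=False)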

If you need more motivation for why it’s a good idea to train yourself to follow these Excel rules, this is a great list of all the common errors that spreadsheets can make:

Proceeding with analysis:

Good data organization is the foundation for any project. Without this, none of the actual meaningful aspects of the project will be easy or efficient and data analyses will drag on and on.

  1. What is your motivation, the overarching goal of the analysis? To test hypotheses? Exploratory?
  2. Adopt automation techniques (iterative patterns that don’t rely on human input) to reduce errors
  3. Reproducibility checkpoints
  4. Taking good notes
  5. Sharing responsibility, team approach


Hopefully your experimental design was set up with a strategy in mind, whether hypothesis-testing or exploratory. Write out each step of the workflow by hand. Just asking yourself, “What am I going to do now?” can help to guide a workflow.

Reproducibility checkpoints, scrutinizing integrity of analyses:

Modularize your workflow and set up checkpoints at certain points to make sure you have what you expect. Does it actually work? Is the outcome consistent? (Some programs have a stochastic element.) Do the results make biological sense?
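As a toy example of what a checkpoint could look like in code (the file names and thresholds here are hypothetical), a sanity check after a read-trimming step might assert that output exists and that read counts behave as expected:

from pathlib import Path

def fastq_read_count(path):
    # A FASTQ record is 4 lines, so reads = lines / 4.
    with open(path) as f:
        return sum(1 for _ in f) // 4

def checkpoint_trimming(raw_fastq, trimmed_fastq):
    # Checkpoint: did trimming produce output, and do the counts make sense?
    assert Path(trimmed_fastq).exists(), "trimming produced no output"
    raw = fastq_read_count(raw_fastq)
    trimmed = fastq_read_count(trimmed_fastq)
    assert trimmed <= raw, "more reads after trimming than before?"
    assert trimmed / raw > 0.5, "lost >50% of reads; check trimming parameters"

# Hypothetical file names:
checkpoint_trimming("sample_raw.fastq", "sample_trimmed.fastq")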

Examples of negative consequences of problems with code, and of research that is not reproducible:

fMRI results:

Clinical genetics:

Unfortunately, there are probably many other examples… (I’m interested in these, so please feel free to comment and share.)

Reproducibility and data management plans are now score-able in grant reviews and peer review. This is starting to be valued more in the research community.

This is difficult. No one is perfect. You get to decide what your values are. We have opportunities to set norms in our communities for what we expect to see.

Take good notes

Include this information:

  • Software version (see the sketch after this list)
  • Description of what the software is doing/the goal
  • What are the default options?
  • Brief notes on deviations from default options
  • Workflows: Include the progression through different software (e.g. PANDAseq -> QIIME -> R). See Figure 1 from Shade and Teal (2015).
  • ALL formatting steps required to move between tools. (Write a tutorial for others. This is a good example.) Avoid manually formatting data. Ideally, a script will be written and made available to automatically re-format data.
  • Anything else that will help you remember what you did
  • The most important person to explain your process to is you in 6 months. Unfortunately, you from 6 months ago will not answer email. If you need to re-do something, you need to remember what you did.
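One low-effort way to capture software versions is to record them automatically at run time. A minimal sketch (the tool names are just examples; substitute whatever your workflow actually calls, and note that different tools report versions in slightly different ways):

import datetime
import subprocess

# Append tool versions to an analysis log (tool names are examples).
tools = ["fastqc", "trimmomatic"]
with open("analysis_log.txt", "a") as log:
    log.write(f"Run date: {}\n")
    for tool in tools:
        result = subprocess.run([tool, "--version"],
                                capture_output=True, text=True)
        log.write(f"{tool}: {(result.stdout or result.stderr).strip()}\n")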

When writing a paper, go through your workflow again. Start from the beginning and make sure you can do again what you thought you did. Make sure you can reproduce your own results. We rarely have the opportunity to do this with lab work because it’s too expensive, but we can do it with computational analyses!

These things take time. It’s easy to fling data everywhere; being organized takes time and effort. Value this.

Shared responsibility

Shared responsibility enhances reproducible workflows. Holding each other accountable for high-quality results, and having confidence in those results, promotes a strong sense of collaboration. Some general advice:

  1. Shared storage and workspaces can facilitate access to all group data. Within a lab group, it is VERY common to have different computers (each lab member usually has one, for example). Institutional shared drives are maintained by administrators and occasionally need to be cleaned out to preserve space.
  2. No one is perfect. Not backing files up, or not knowing where files or code are, are common mistakes. It happens. It’s easy to throw your hands up in the air and complain, or to shame each other’s work habits related to all the topics we’re discussing here. But shame is less productive than learning from mistakes, growing, and discussing as a group. Use these opportunities to grow together productively. Few people have malicious intent. We’re all people. Work together to make productive, positive changes.
  3. Talk to the data librarians at your institution. (Advocate for starting such a position if this person does not exist.)
  4. Share data. Dr. C. Titus Brown advocates for publishing all pieces of data publicly on figshare. Half of people’s problems with data stem from the desire to keep data private until publishing, which is usually >3 years from the time of collection. By then you can’t find the data, or you spend too much time trying to make them “perfect”. Publish the data as soon as you collect it; you can go back and improve the data annotations later. When you do a “data dump”, your name will be associated with those data. Cases of people maliciously wanting to steal your data are almost unheard of. (If you have examples, it would be interesting to hear about them.) There is almost never a reason NOT to publish data as soon as they are collected. Doing so is a great way to advertise what you are doing, so others can collaborate or avoid going down the same avenue if it’s unproductive.
  5. Join data working groups
  6. Use version control repositories for code and data analyses (e.g. GitHub)
  7. Set expectations for ‘reproducibility checkpoints’ with team “hackathons” or open-computer group meetings dedicated to analysis
  8. Lab paper reviews focused on data reproducibility
  9. Look for help/support outside the lab, e.g. bioinformatics or user group office hours, Stack Overflow, BioStars. You are not alone: you are not the only one who wants to learn these things. We can never know everything, so talk to people.

Bioinformatics resources:

If you see a typo or problem with tutorials, please let people know. 🙂

Here is an exercise to try!

View of Eel Pond from Water St.



Marine Microbes! What to do with all the data?

UPDATE: Check out Titus’ blog post, Bashing on monstrous sequencing collections.

Since Sept 2015, I’ve been a PhD student in C. Titus Brown’s lab at UC Davis working with data from Moore’s Marine Microbial Eukaryotic Transcriptome Sequencing Project (MMETSP). I would like to share some progress on that front from the past 6 months. Comments welcome!

MMETSP is a really unique and valuable data set consisting of 678 cultured samples from 306 species representing more than 40 phyla (Keeling et al. 2014). It is public and available on NCBI. The data set consists entirely of cultured samples submitted by a large consortium of PIs to the same sequencing facility. All samples were PE 100 reads run on an Illumina HiSeq 2000 sequencing instrument; a few samples were run on a GAIIx.

For many species in this set, this is the only sequence data available, because reference genomes do not exist. The figure below from Keeling et al. 2014 shows the diverse relationships between samples represented in the MMETSP. Dashed lines indicate groups without a reference genome, whereas solid lines indicate groups with references.


Here are a few stars (Micromonas pusilla – left, Thalassiosira pseudonana – right):


It’s worth emphasizing that this is one of the largest (if not THE largest) public, standardized RNAseq datasets available from a diversity of species. Related to this cool dataset, I’m really grateful for a number of things: a.) the MMETSP community, who took the initiative to put this sequencing dataset together, b.) the Moore Data Driven Discovery program for funding, c.) working with a great PI who is willing and able to focus efforts on these data, and d.) being in a time when working with a coordinated nucleic acid sequencing data set from such a large number of species is even possible.

Automated De Novo Transcriptome Assembly Pipeline

The NCGR has already produced de novo transcriptome assemblies of all samples from this data set with their own pipeline. Part of the reason we decided to make our own pipeline was that we were curious to see if ours would be different. Also, because I’m a new student, developing and automating this pipeline has been a great way for me to learn about automating pipeline scripts, de novo transcriptome assembly evaluation, and the lab’s khmer software for digital normalization. We’ve assembled just a subset of 56 samples so far, not the entire data set. It turns out that our assemblies are different from NCGR’s. (More on this at the bottom of this post.)

All scripts and info that I’ve been working with are available on github. The pipeline is a modification of the first three steps of the Eel Pond mRNAseq Protocol to run on an AWS instance. (I’m aware that these are not user-friendly scripts right now, sorry. Working on that. My focus thus far has been on getting these to be functional.)

  1. download raw reads from NCBI
  2. trim raw reads, check quality
  3. digital normalization with khmer
  4. de novo transcriptome assembly with Trinity
  5. compare new assemblies to existing assemblies done by NCGR

The script takes a metadata file downloaded from NCBI (SraRunInfo.csv); see the screenshot below for how to obtain this file:


The metadata file contains info such as Run (ID), download_path, ScientificName, and SampleName. These are fed into a simple Python dictionary, which allows for looping and indexing to easily access and run individual processes on these files in an automated, high-throughput way (a subset of the dictionary data structure is shown below):
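For illustration, here is a minimal sketch of how such a dictionary could be built (the column names follow SraRunInfo.csv; the structure in the actual pipeline scripts may differ):

import csv

# Build a {run_id: metadata} dictionary from the NCBI SraRunInfo.csv file.
sra_info = {}
with open("SraRunInfo.csv") as f:
    for row in csv.DictReader(f):
        sra_info[row["Run"]] = {
            "download_path": row["download_path"],
            "ScientificName": row["ScientificName"],
            "SampleName": row["SampleName"],
        }

# Loop over all runs to launch per-sample processing.
for run_id, info in sra_info.items():
    print(run_id, info["ScientificName"])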


Each subsequent script uses this dictionary structure to loop through and run commands for different software (trimmomatic, fastqc, khmer, Trinity, etc). Assemblies were done separately for each sample, regardless of how the samples were named. This way, we will be able to see how closely assemblies cluster together or apart, agnostic of scientific naming.


There have been several challenges so far in working with this public data set.

It might seem simple in retrospect, but it actually took me a long time to figure out how to grab the sequencing files, what to call them, and how to connect the names of samples and codes. The SraRunInfo.csv file available from NCBI helped us to translate SRA ids to MMETSP ids and scientific names, but figuring this out required some poking around and emailing people.

Second, for anyone in the future who is in charge of naming samples: small deviations from the naming convention, e.g. an extra “_” after the sample name, can mess up automated scripts. For example, ids like MMETSP0251_2 had to be split with the following lines of code:

test_mmetsp = mmetsp_id.split("_")  # split ids like MMETSP0251_2 on the underscore
if len(test_mmetsp) > 1:
    print(test_mmetsp)

Resulting in this:

['MMETSP0251', '2']
['MMETSP0229', '2']

Then I grabbed the first entry of the list so that these ids looked like the rest of the MMETSP ids, without the “_”. Not really a big deal, but it created a separate problem that required some figuring out. My advice is to pick one naming convention and then name all of the files with the same exact structure.

Lastly, several of the records in SraRunInfo.csv were not found on the NCBI server, which required emailing with the SRA.


The people affiliated with SRA who responded were incredibly helpful and restored the links.

Assembly Comparisons

I used the transrate software for de novo transcriptome assembly quality analysis to compare our assemblies with the NCGR assemblies (*.cds.fa.gz files). Below are frequency distributions of the proportion of reference contigs with Conditional Reciprocal Best-hit BLAST (CRBB) matches, described in Aubry et al. 2014. The left histogram shows our “DIB” contigs compared to NCGR’s; on the right are NCGR contigs compared to DIB contigs. This means that we have assembled almost everything in their assemblies, plus some extra stuff!
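For each sample, this reference-based comparison can be scripted; here is a sketch of the transrate call wrapped in Python, to fit the automated pipeline (the file names are hypothetical):

import subprocess

# Score one of our assemblies against the corresponding NCGR assembly
# using transrate's reference-based metrics (file names are examples).
subprocess.run([
    "transrate",
    "--assembly", "MMETSP0090.Trinity.fasta",  # our DIB assembly
    "--reference", "MMETSP0090.cds.fa",        # NCGR assembly
    "--output", "transrate_MMETSP0090",
], check=True)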


Here is the same metric shown in a different way:


We’re not sure whether the extra stuff we’ve assembled is real, so we plan to ground-truth a few assemblies against a few reference genomes to see.

For further exploration of these contig metrics, here are some static notebooks:

If you would like to explore these data yourself, here is an interactive binder link that lets you run the actual graph production (thanks to Titus for creating this link!):


Based on these findings, several questions have come up about de novo transcriptome assemblies. Why do different pipelines give different results? Are these differences significant? What can this tell us about other de novo transcriptome assemblies? Having distributions of quality metrics from many assemblies is an unusual situation: usually, assemblies are done by one pipeline for just one species at a time, so means and standard deviations are not available. More and more de novo transcriptome assemblies are being done and published by different groups worldwide for species x, y, and z, yet evaluating the quality of these assemblies is not straightforward. Is it worth developing approaches, a prioritized set of metrics, that would allow any de novo assembly to be evaluated in a standard way?

Moving forward, the plan is to:

  • keep working on this assembly evaluation problem,
  • assemble the rest of the samples in this data set,
  • make these data and pipeline scripts more user-friendly and available,
  • standardize annotations across species to enable meaningful cross-species analyses and comparisons.

Questions for the Community

What analyses would be useful to you?

What processed files are useful to you? (assembly fasta files, trimmed reads, diginorm reads)

The idea is to make this data-intensive analysis process easier for the consortium of PIs who put these data together, so that discoveries can be proportional to the data collected. In doing so, we want to make sure we’re on the right track. If you’re reading this and are interested in using these data, we’d like to hear from you!

Special thanks to C. Titus Brown, Harriet Alexander and Richard Smith-Unna for guidance, insightful comments, and plots!

Also, the cat:


Intro git – Lab meeting

By the end of this meeting, you will be able to:

  • Create a new repository
  • Edit the README (markdown lesson from Reid Brennan)
  • clone to local desktop
  • make changes
  • commit and push changes

1. Create a new repository

Click on the “New” button to create a new repository:


Name the repository whatever you would like. Examples: test, data, lab protocols, awesome killifish RNA extractions, significant genes lists, abalone data files, etc. The idea is that this will be your repository/directory of version-controlled files that you will pull/push back and forth between your computer and GitHub. Click on “Initialize this repository with a README”:


You have created a new repository!


2. Edit the markdown file (Reid)

3. clone directory to local desktop

To copy the URL, click on the clipboard icon next to the web address for this repository (see below). Side note: this is the same web address you can use to share this repository with colleagues. You can also just copy the URL from the address bar in your browser.


Open your terminal and navigate to a directory where you would like to put the new repository. Type this command to “clone” the repository, pasting the URL you copied in place of the placeholder:

git clone <repository-URL>


You should see “Cloning into ___” like in the screenshot above. Use the ‘ls’ command to list the contents of the current working directory to make sure it’s there. It is!


4. Make changes to the git directory

Now, we can make changes to this directory and they will be tracked. First, change directories into the one you just created:

cd super_awesome_killifish_data

Let’s copy a file into this directory. (This is a small text file I had one directory up from the current one, so I use ../ to indicate where it will be found and . to indicate that I want to copy it to the current directory.)

cp ../cluster_sizes.txt .

5. Commit and push changes

Now that you have made a change to this directory, you want to make sure it is saved to GitHub. The following commands are standard for staging and pushing changes to a GitHub repository:

git status
git add --all
git commit -m "added cluster_sizes.txt for A_xenica"
git push

(Type in your GitHub username and password. The letters might not show up on the screen as you type, but they are being entered, don’t worry!)


Now you can go to GitHub on the web and see the changes you made:




Fish Gill Cell Culture System, MCB263 science communication assignment

My tweet for this week’s MCB263 science communication assignment is about a new technique from researchers at King’s College London, published in Nature Protocols, that uses cultured freshwater gill tissue as a proxy for in vivo fish gill tissue, which is commonly used in toxicity testing to assess whether compounds have negative biological effects. Since gill tissue is designed to filter water and extract oxygen and osmolytes for the fish, it is a great tissue for measuring the toxicity of compounds in water, as they get trapped in the gill tissue. With the fish gill cell culture system of Schnell et al. (2016), whole fish do not have to be sacrificed for this purpose; only the gill tissue is used to assess potential toxicity and bioaccumulation, or for environmental monitoring. Not only is this system resource-efficient and portable, requiring only a thin layer of tissue and some reagents to maintain, but it does not require raising and maintaining whole animals. The elegant features of gill tissue can be used without waste.

This video demonstrates the technique:

My group’s tweets:

Laura Perilla-Henao: market resistance to synthetic malaria drug

Ryan Kawakita: E. coli used to generate morphine precursor

Stephanie Fung: researchers publish in Cell exploring microbiome evolution from hunter-gatherer to modern western society

Prema Karunanithi: release of real-time data on Zika virus infection study in monkeys


Downloading masses of files, useful ftp commands

You have finished a sequencing project and now your sequencing facility is sending you lots of files! You’re eager to see your results. Your facility sends you a link with files displayed on a website. Now what are you supposed to do?

First, log in to the remote server (AWS instance or HPC cluster) where you will work with the files.

Depending on the type of server where your files have been made available (ftp or http), try these commands:

If ftp server, with user and password required:

wget -r --user XXX --password XXX ftp://1234.5678.10

You can also log in to the ftp server, navigate and change directories, then use mget to copy multiple files from the remote server to your local machine. (Although this will sometimes take longer than wget, for some reason.) (The -P 2121 specifies the port and is optional; leave it out if you don’t know the port.)

ncftp -u XXX -p XXX -P 2121 ftp://1234.5678.10
cd /path/to/files
mget *

If http or https server, with user and password required: the -r is recursive, -l1 limits recursion to one level, and --no-parent means don’t ascend to the parent directory. (These stop wget from automatically following links and downloading files outside the directory you want, which can happen with websites.)

wget -r -l1 --no-parent --user=user --password=password https://1234.5678.10/path/to/files

With no user or password required:

wget -r -l1 --no-parent --no-check-certificate https://1234.5678.10/path/to/files

Most facilities are friendly and will help you download your files. In some cases they will provide temporary ssh access, but other times not; it depends on the facility. If you don’t know the name of the ftp or http server, or the directory where the files are located, write to them and ask.

Thanks to Luiz Irber and Dragos Scarlet for the help and motivation for working through this problem. 🙂
