Adventures with ONT MinION at MBL’s Microbial Diversity Course

My time at the Microbial Diversity Course at MBL has come to an end after a two-week visit with our lab’s MinION to sequence genomes from bacterial isolates that students collected from the Trunk River in Woods Hole, MA. (Photo by Jared Leadbetter of ‘Ectocooler’ Tenacibaculum sp., isolated by Rebecca Mickol.)


In Titus Brown’s DIB lab, we’ve been pretty excited about the Oxford Nanopore Technologies MinION sequencer for several reasons:

1) It’s small and portable. This makes the MinION a great teaching tool! You can take it to workshops. Students can collect samples, extract DNA, prep libraries, sequence, and learn to use assembly and annotation bioinformatics software, all within a week.

2) We’re interested in developing streaming software that’s compatible with the sequencing -> find what you want -> stop sequencing workflow.

3) Long reads can be used, much like PacBio data, to resolve genome and transcriptome (!) assemblies from existing Illumina data.

Working with any new technology, especially from a new company, requires troubleshooting. Twitter posts are cool, but they tend to make it all look very easy. There is a MAP community for MinION users, but a login is required and public searching is not possible. Compared to Illumina sequencing, there is not yet much accumulated experience out there.

Acknowledgements

Acknowledgements are usually saved for the end, but since this is a long blog post, I thought I would front-load the gratitude.

I have really benefitted from blog posts by Keith Robison and lonelyjoeparker. Nick Loman and Josh Quick’s experiences have also been helpful.

There is no substitute, though, for having people to talk to in person about technical challenges. Megan Dennis and Maika Malig at UC Davis have provided amazing, supportive guidance over the past few months, offering lab space and sharing their own experiences with the MinION. I’m very grateful to be at UC Davis and working with Megan.

This trip was made possible by support from my PI, Titus Brown, who provided funding for my travel and all the flowcells and reagents for the MinION sequencing. It was essential to have this two-week block of time to focus on nothing but getting the MinION to work, asking questions, and figuring out what works (and what doesn’t).

Special thanks to Rebecca Mickol and Kirsten Grond in the Microbial Diversity course for isolating and culturing the super cool bacterial samples. Scott Dawson at UC Davis (faculty at the Microbial Diversity course) was instrumental in helping with DNA extractions. Jessica Mizzi assisted with library prep protocol development and troubleshooting. Harriet Alexander assisted with the assembly, library prep and showing me around Woods Hole, which is a lovely place to visit. Thank you also to the MBL Microbial Diversity Course, Hilary Morrison and the Bay Paul Center for hosting lab space for this work to take place.


Files: https://github.com/ljcohen/dib_ONP_MinION

Presentation slides: https://docs.google.com/presentation/d/1Zqd1ayumdZqYc5e8bfeul8-57trKGwGdOFUdPc2-mIU/edit?usp=sharing

Immediately following the Woods Hole visit at MBL, I went to the MSU Kellogg Biological Station as a TA for the NGS 2016 course and wrote a tutorial for analyzing ONP data:

http://angus.readthedocs.io/en/2016/analyzing_nanopore_data.html

Purchasing and Shipping

Advice: allow 2-3 months for ordering. We ordered one month in advance. While ONP customer service probably worked overtime to send our flowcells and Mk1B after several emails, chats, and the calling in of special favors, it is unclear whether we can count on a scheduled delivery with students in the future. Communication required about a dozen emails, and we could never get confirmation that the flowcells would arrive in time for the course. It turned out that our order had shipped and arrived on time, but we did not know because a tracking number was never sent to us. It took about a day of emailing and waiting to track the boxes down. Thankfully, the boxes were stored properly in the MBL shipping warehouse.

Communicate with ONP constantly. Stay on top of shipments; ask for tracking numbers and confirmation of shipment. Find out where the shipment is being delivered, as the address you entered may not be the one on the shipping box, in which case your order will be delivered to the wrong place.

Flowcells

QC the flowcells immediately. Bubbles are bad:

[Photo: a flowcell that arrived with a bubble]

We ordered 7 flowcells (5, plus 2 that came with the starter pack). The flowcells seemed to have inconsistent pore numbers, and some arrived with bubbles. One flowcell had zero pores. ONP sent us a replacement for this flowcell within days, which was very helpful. However, for the flowcells that had bubbles, ONP technical staff instructed me to draw back a small volume (~15 ul) of fluid to try to remove the bubble, then QC again. This did not work. The performance of these flowcells did not meet our expectations.

In communicating with the company, we were told that there was no warranty on the flowcells.

DNA Extractions

The ONP protocol says that at least 500-1000 ng of clean DNA is required for successful library prep. Try to aim for more than this, and for DNA of as high molecular weight as possible. Be careful with your DNA; do not mix liquids by pipetting. For the bacterial isolates from liquid culture, Scott Dawson recommended using Qiagen size-exclusion columns for purification, and this worked really well for us. We started with ~2000 ug and used the FFPE repair step.

The ONP protocol includes shearing with the Covaris g-TUBE to 8 kb. When I eliminated this step to preserve longer strands, there was little to no yield, and the samples that did have adequate yield gave poor sequencing results. In discussing this with ONP, we suspected that the strands were shearing on their own somewhere during the multiple reactions, then either getting washed away during the bead cleanup steps, or losing the tether and hairpin adapters so the strands were not being recognized by the pores.

We sequenced all three sets of DNA below (ladder: 1-10 kb). The Maxwell prep (gel below on the left) had a decent library quantity, but the sequencing read lengths were not as long as we would have liked, which makes sense given the small, smeary bands seen. (poretools stats report)

[Gel images of the three DNA preps, 1-10 kb ladder; Maxwell prep on the left]

Library prep

When we first started troubleshooting the MinION, the protocols available through the MAP were difficult to follow in the lab. We needed a sheet to just print out and follow from the bench, so we created this:

https://docs.google.com/document/d/1EvxAyJFRu96_caWBEpCcQc7rfRV_Zm1LudScZA8lWCU/edit?usp=sharing

A few months ago, ONP came out with a pdf checklist for library prep, which is great:

https://github.com/ljcohen/dib_ONP_MinION/blob/master/protocols_manuals/ONP_MinION_lib_prep_protocol_SQK-NSK007.pdf

The library prep is pretty straightforward. One important thing I learned concerns handling viscous enzyme mixes like the NEB Blunt/TA Master Mix:

Library prep and loading samples onto the flowcell can be tricky and nerve-wracking for those who are not comfortable with lab work. I have more than four years of molecular lab experience: knowing how to treat reagents, doing quick spins, pipetting small volumes, being careful not to waste reagents. One important point to convey to people who do not do molecular lab work often is how viscous and sticky the enzyme mixes that come in glycerol are. You think you’re drawing up a certain volume, but an equal amount is often stuck to the outside of the pipette tip. You have to wipe it off on the side of the tube so you don’t add it to your reaction volume, changing the optimal concentration and (probably most important) wasting reagent.

Other misc. advice:

  • The calculation M1V1 = M2V2 is your friend (see the worked example after this list).
  • Don’t mix by pipetting.
  • Instead, tap or flick the tube with care.
  • Quick spin your tubes often to ensure the liquid is collected at the bottom.
  • Bead cleanups require patience and care while pipetting.
  • Be really organized with your tubes (there are a handful of reagent tubes that all look the same). Use a checklist and cross off each reagent as you add it.
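
As a quick illustration of the M1V1 = M2V2 bullet above, here is a minimal dilution calculation in Python; the concentrations and volumes are made-up example numbers, not values from an actual library prep:

# Minimal C1*V1 = C2*V2 helper; all numbers below are hypothetical examples.
def volume_of_stock_needed(stock_conc, target_conc, final_vol):
    """Volume of stock to add so that stock_conc * v = target_conc * final_vol."""
    return target_conc * final_vol / stock_conc

stock_ng_per_ul = 150.0   # concentration of the DNA stock (ng/ul)
target_ng_per_ul = 20.0   # desired concentration in the reaction (ng/ul)
final_ul = 50.0           # final reaction volume (ul)

v1 = volume_of_stock_needed(stock_ng_per_ul, target_ng_per_ul, final_ul)
print("Add %.1f ul of stock and %.1f ul of buffer/water." % (v1, final_ul - v1))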

These are the things I take for granted when I’m doing lab work on a regular basis. It takes a while to remember when I’m in the lab again after taking a hiatus to work on computationally-focused projects.


Computer Hardware

In October 2015, when we were ordering everything to get set up, the computer hardware requirements for the MinION were: 8 GB RAM, a 128 GB SSD, and an i7 CPU. This is what we ended up ordering (which took several weeks to special-order from the UC Davis computer tech center):

DH Part #: F1M35UT; Manufacturer: HP; Mfr #: F1M35UT#ABA. HP ZBook 15 G2 15.6″ LED Mobile Workstation: Intel Core i7-4810MQ quad-core (4 cores) 2.80 GHz, 8 GB DDR3L SDRAM, 256 GB SSD, DVD writer, NVIDIA Quadro K1100M 2 GB, Windows 7 Professional 64-bit (English, upgradable to Windows 8.1 Pro), 1920 x 1080 16:9 display, Bluetooth, English keyboard, wireless LAN, webcam, 4 USB ports total (3 x USB 3.0), network (RJ-45), headphone/microphone combo port

One run requires around 30-50 GB, depending on the quality of the run. The .fast5 files are large, even though the resulting .fastq are small (<1 GB). The hard-drive on our MinION laptop is 256 GB, which can fill up fast. We bought a 2 TB external hard-drive, which we can configure Metrichor to download the reads to after basecalling, saving space on the laptop hard-drive.
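
As a small pre-run sanity check, something like the sketch below (assuming Python is available on the laptop) can confirm there is enough free space before starting; the path is a placeholder and 50 GB is just the upper end of what we have seen per run:

# Check free disk space before a run; run_dir is a hypothetical MinKNOW output location.
import shutil

run_dir = r"C:\data\minion_runs"
needed_gb = 50   # rough upper-end per-run requirement (30-50 GB)

free_gb = shutil.disk_usage(run_dir).free / 1e9
if free_gb < needed_gb:
    print("Only %.0f GB free; move old .fast5 runs to the external drive first." % free_gb)
else:
    print("%.0f GB free; enough for another run." % free_gb)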

Software and Data
  • Windows sucks
  • MinKnow has had a new GUI (graphical user interface) for the past few months. It’s a bit annoying getting used to it, but in general not too bad.
  • The poretools software for converting .fast5 to .fastq is buggy on Windows and does not play well with MinKnow. There’s probably a way to get them both to work, but I’ve already spent ~2-4 hrs troubleshooting this issue, so I’m done with it for now. Instead, we’ve been uploading .fast5 files to a Linux server and running poretools there.
  • The MinKnow python scripts sometimes crash during the run! You can open the MinKnow software again, start the script again, and it should resume the run from where it left off.
  • Use the 48 hr MinKnow script for sequencing.
  • Our flow of data: raw signal from the MinION (laptop) -> upload to the Metrichor server for basecalling -> download to the external hard-drive (“pass” or “fail”, depending on the Metrichor workflow chosen, e.g. 1D, 2D, or barcoding) -> plug the external hard-drive into a Linux machine or Linux laptop (for some reason this is easier on a Linux laptop than on Windows…) and transfer to a Linux server -> on the Linux server, run poretools to convert to fastq/fasta -> analysis. (See the sketch after this list.)
  • This all seems kind of ridiculous. If there is a better way, please let us know!
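
For reference, here is a minimal sketch of the fast5-to-fastq step at the end of that flow, assuming poretools is installed on the Linux server; the paths are placeholders:

# Convert basecalled .fast5 files to FASTQ with poretools and print summary stats.
import subprocess

fast5_dir = "/data/minion_run_01/pass"        # hypothetical directory of basecalled .fast5 files
fastq_out = "/data/minion_run_01/run01.fastq"

# poretools fastq writes FASTQ records to stdout, so redirect into a file.
with open(fastq_out, "w") as out:
    subprocess.run(["poretools", "fastq", fast5_dir], stdout=out, check=True)

# Read count and length statistics for a quick sanity check.
subprocess.run(["poretools", "stats", fast5_dir], check=True)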


Workshops

In a future workshop setting, where students are doing this for the first time but we have more experience now, a potential schedule could go something like this:

Day 1: Collect sample, culture

Day 2: Extract DNA, run on gel, quantify

Day 3: Library prep, sequence (this will be a long day)

Day 4: Get sequences, upload, assess reads, start assembly

Day 5: Evaluate assembly, annotate

This is similar to the schedule arranged for Pore Camp, run by Nick Loman at the University of Birmingham in the UK. They have some great materials and experiences to share:

http://porecamp.github.io/

Cost

  • The cost per sample is still unknown.
  • What is the cost of troubleshooting?

I’ve put together a quick ONP MinION purchasing sheet:

https://docs.google.com/spreadsheets/d/1yBncz75kgwExCXy7sC9LsMaDGs8OJJJGg9f4o3DcoQE/edit?usp=sharing

Generally, these are the items to purchase:

  • Mk1B starter pack (came with 2 flowcells)
  • computer
  • ONP reagents
  • third-party reagents (NEB)

Getting Help

  • The MAP community has some answers.
  • There is no phone number to call ONP. In contrast, Illumina has a fantastic customer service phone line, with well-trained technicians on the other end to answer emergency calls. Reagents and flowcells are expensive. When you’re in the lab and there is a problem, like a bubble on the flowcell or a low pore number after QC, it is often necessary to talk to a person on the phone so you don’t waste time or money.
  • I’ve had many good email conversations with ONP tech support, but there is no substitute for talking a problem through with someone on the phone. Often there are things to try after the email, and it is difficult to follow up by going back and forth over email.
  • The live chat feature on the ONP website is great! (During UK business hours, there is a widget at the bottom of the store website that says “Do you have a question?”; during off hours it says “Leave a message”.)

I realized through this process that I had lots of questions and few answers. The MAP has lots of forum questions but few manuals. Phrase searching sucks: if you search for a phrase in quotes, it will still search for the individual words. For example:

[Screenshot: a quoted-phrase search on the MAP forum returning results for the individual words]

Remaining Questions:

1. Why does the number of flow cell pores fluctuate? What is the optimal pore number for a flow cell?

2. What is the effect of 1D reads on the assembly? Can we use the “failed” reads for anything? 

3. How long will a run take? 

4. How much hard-disk space is required for one run?

5. When are the reads “passing” and when are they “failing”? Is there value to the failing reads? 

6. How can we get the most out of the flow cells? There seem to be a lot of unknowns related to flowcell efficiency. We tried re-using a washed flow cell. There were >400 pores during the initial QC, but after we loaded the library and started the run, the pore numbers were in the 80s-100s. Two hours later, this number dropped to the ~30s. I added more library, and the pore numbers never increased again. Is this a result of the pore quality degrading? The next morning, I loaded more library again; not much change. We decided to switch flowcells and try a new one.

7. Are there batch effects of library prep and/or flowcells? Should we be wary of combining reads from multiple flowcells?

Future

In the future, the aim is to move away from worrying about the technology details and focus on the data analysis and what the data mean. The goal should be to focus on the biology and why we’re interested in sequencing anything and everything. What can we do with all of this information, now that we can sequence the genome of a new bacterial species in a week?

Feel free to comment and contact!



Computing Workflows for Biologists – Dr. Tracy Teal

I’m so excited to be visiting the Microbial Diversity Course at the Marine Biological Lab in Woods Hole, Massachusetts right now. I’m really enjoying talking to students and faculty working on projects related to microbial communities, microbial metabolism, genomics, and transcriptomics. I’m here with our lab’s MinION to sequence genomes from cultured microorganisms isolated by students during the course. (More about this in a future blog post!)

View of Eel Pond from MBL St.


Each day of the course, there are lectures in the morning on a variety of interesting topics relevant to microbial diversity. For those not familiar, this field is rapidly accumulating and analyzing large collections of data. For example, see Raza and Luheshi (2016).

Dr. Tracy Teal, Executive Director of Data Carpentry, gave us an inspiring talk this morning on data analysis, reproducibility, and sharing.


Read her paper, which summarizes these topics:

Shade and Teal. 2015. Computing Workflows for Biologists: A Roadmap. PLoS Biology. doi:10.1371/journal.pbio.1002303

She raises a number of interesting points and gives good advice relevant to the growing amount of data in biology, so I wanted to write them down and share them here.

Dr. Teal opens with the question: “How many people use computers for your work?” Everyone in the room raised their hand.

We all use our computers for some aspect of our research.

The reasons for using good practices for data management and computer usage are not just about the greater good; they are for you and your sanity. We all appreciate how much data even one project can generate, and this is not going to change: data production keeps trending upward over time. Thinking about this and planning now will help the future you, even if you rely on others for the bulk of the data analysis.

“How many of you work with other people?” Everyone works in a team in a lab and sometimes with outside collaborators. There is generally a need to communicate with others about data analyses so that someone besides you can understand what you did. Paper reviewers and readers should be able to understand. But first, there are the people in your lab. This is the “leaving science forever” test: ask yourself whether what you are doing could be followed by someone else if you were to suddenly leave. Have you ever taken over a project from someone and found that the files, samples, and notebooks were not descriptive enough for you to pick up where they left off? Don’t wait until this happens. The more transparent and vigilant you are about this on a regular basis, the happier you will be in the future.

What knowledge and elements are necessary for these good practices?

Metadata:

  1. How were data generated?
  2. Where are raw data located? (e.g. HPLC files, *.txt files, *.fastq sequence files, microarray *.cel files, etc)
  3. What were the data cleaning steps? (e.g. formatting steps between the raw data and doing something interesting with software. This is actually a HUGE part of data analysis pipelines and can be >80% of your work. The more of these steps you can automate, the better off you will be in the future.)
  4. Steps of the data analysis: exact parameters used, software versions
  5. Final plots and charts: This is the least important. If you keep track of the other steps, you should be able to recreate the exact plots very easily.

Let’s talk about data.

Keep raw data files raw. Make copies of raw files before you start to work with the data. Post these files somewhere public, in a place where they will not be deleted. Why not make them public? If you don’t want to do that, at least put them in a safe lockbox where someone else can access them if needed.

How many people have a data management plan? If a lab has a policy about where data must be placed, other than on someone’s personal hard-drive, the information has a greater chance of surviving after people leave the lab.

Let’s talk about spreadsheets.

Have you ever done something in an Excel spreadsheet that made you sad? We all have. Single columns get re-sorted instead of the whole sheet. Autocorrected spelling changes gene names. Dates get mangled. MS Excel makes these formatting mistakes, and Google Sheets makes the same ones.

http://www.datacarpentry.org/spreadsheet-ecology-lesson/

Train yourself to think like a computer.

There are rules for using Excel. This may seem silly, but following these rules will actually save you and your collaborators much time. People know spreadsheets, but many biologists use them in a way that is time-consuming in the long run, e.g. laying out information to be read by humans, with color-coding and notes.

Follow these simple rules:

  • Put each variable into a separate column.
  • Do not use color to convey information. Instead, add a column (e.g. “calibrated”) with a one- or two-word code, e.g. YES or NO, EXTRACTED or NOT, etc.
  • Do not use Excel data files to hold long metadata notes about your file. These are best saved in a separate README file.
  • Leave raw data raw. If you’re going to transform data or perform a calculation, create a new file or a separate column(s).
  • Break data down into the finest-scale resolution to give yourself the most options. Don’t combine multiple types of information into one column, e.g. Species-Sex or Month-Year. One simple trick to avoid the annoying auto-formatting of dates in Excel: use three separate columns for month, day, and year. This lets you look at date ranges (e.g. only fall), easily pull out years, or grab the 15th of every month. It gives more flexibility!
  • Export your .xls into a .csv to avoid errors in downstream analyses (see the sketch after this list).
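
As a small illustration of these rules, here is a hedged sketch in Python/pandas that splits a combined Species-Sex column, breaks a date into year/month/day columns, and exports to .csv; the table and column names are made-up examples:

# Tidy a made-up example table: one variable per column, then export to .csv.
import pandas as pd

df = pd.DataFrame({
    "Species-Sex": ["F_heteroclitus-M", "F_heteroclitus-F"],   # hypothetical combined column
    "Date": ["2016-07-28", "2016-07-29"],
})

# Split the combined column into two separate variables.
df[["Species", "Sex"]] = df["Species-Sex"].str.split("-", n=1, expand=True)
df = df.drop(columns=["Species-Sex"])

# Break the date into year/month/day columns for flexible filtering later.
dates = pd.to_datetime(df["Date"])
df["Year"], df["Month"], df["Day"] = dates.dt.year, dates.dt.month, dates.dt.day

# Export to plain .csv for downstream analyses.
df.to_csv("samples_tidy.csv", index=False)
print(df)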

If you need more motivation for why it’s a good idea to train yourself to follow these Excel rules, this is a great list of all the common errors that spreadsheets can make:

http://www.datacarpentry.org/2015-05-29-great-plains/spreadsheet-ecology/02-common-mistakes.html

Proceeding with analysis:

Good data organization is the foundation for any project. Without this, none of the actual meaningful aspects of the project will be easy or efficient and data analyses will drag on and on.

  1. What is your motivation, overarching goal of analysis? To test hypotheses? Exploratory?
  2. Adopt automation techniques, i.e. iterative patterns that don’t rely on human input, to reduce errors
  3. Reproducibility checkpoints
  4. Taking good notes
  5. Sharing responsibility, team approach

Motivation

Hopefully your experimental design was set up to motivate different strategies, hypothesis-testing vs. exploratory. Write out each step of the workflow by hand. Just asking yourself, “What am I going to do now?” can help to guide a workflow.

Reproducibility checkpoints, scrutinizing integrity of analyses:

Modularize your workflow and set up checkpoints to make sure you have what you expect. Does it actually work? Is the outcome consistent? (Some programs have a stochastic element.) Do the results make biological sense?

Examples of the negative consequences of problems with code and of research that is not reproducible:

fMRI results:

http://www.economist.com/news/science-and-technology/21702166-two-studies-one-neuroscience-and-one-palaeoclimatology-cast-doubt

Clinical genetics:

http://www.theatlantic.com/science/archive/2015/12/why-human-genetics-research-is-full-of-costly-mistakes/420693/

Unfortunately, there are probably many other examples… (I’m interested in these, so please feel free to comment and share.)

Reproducibility and data management plans are now score-able in grant reviews and peer review. This is starting to be valued more in the research community.

This is difficult. No one is perfect. You get to decide what your values are. We have opportunities to set norms in our communities for what we see.

Take good notes

Include this information:

  • Software version (see the sketch after this list for one way to record this)
  • Description of what the software is doing/the goal
  • What are the default options?
  • Brief notes on deviations from the default options
  • Workflows: include the progression through different software (e.g. PANDAseq -> QIIME -> R). See Figure 1 from Shade and Teal (2015).
  • ALL formatting steps required to move between tools. (Write a tutorial for others. This is a good example.) Avoid manually formatting data; ideally, a script will be written and made available to re-format data automatically.
  • Anything else that will help you remember what you did.
  • The most important person to explain your process to is you in 6 months. Unfortunately, you from 6 months ago will not answer email. If you need to re-do something, you need to remember what you did.
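
One lightweight way to record software versions in your notes is a tiny script like the sketch below. The tool names and the assumption that each supports a --version flag are mine, not from the talk; substitute whatever your pipeline actually uses:

# Append the versions of the tools used (plus the Python version) to a notes file.
import datetime
import subprocess
import sys

tools = ["fastqc", "samtools"]   # hypothetical tools assumed to support --version

with open("analysis_notes.txt", "a") as notes:
    notes.write("\n## Run on %s, Python %s\n" % (datetime.date.today(), sys.version.split()[0]))
    for tool in tools:
        try:
            out = subprocess.run([tool, "--version"], capture_output=True, text=True)
            version = (out.stdout + out.stderr).strip() or "unknown"
        except FileNotFoundError:
            version = "not installed on this machine"
        notes.write("%s: %s\n" % (tool, version))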

When writing a paper, go through your workflow again. Start from the beginning and make sure you can do again what you thought you did. Make sure you can reproduce it. We rarely have the opportunity to do this with lab work because it’s too expensive, but we can do this with computational analyses!

These things take time. It’s easy to fling data everywhere. Being organized takes time and is less easy. Value this.

Shared responsibility

Shared responsibility enhances reproducible workflows. Holding each other accountable for high-quality results, and for confidence in those results, promotes a strong sense of collaboration. Some general advice:

  1. Shared storage and workspace can facilitate access to all group data. Within a lab group, it is VERY common to have different computers (each lab member usually has one, for example). Institutional shared drives are maintained by administrators, and files occasionally need to be deleted to preserve space.
  2. No one is perfect. Not backing files up, or not knowing where files or code are, are common mistakes. It happens. It’s easy to throw your hands up in the air and complain or shame each other’s work habits related to all the topics we’re discussing here. Shame is less productive than learning from mistakes and growing and discussing as a group. Use these opportunities to grow together productively. Few people have malicious intent. We’re all people. Work together to make productive, positive changes.
  3. Talk to the data librarians at your institution. (Advocate for starting such a position if this person does not exist.)
  4. Share data. Dr. C. Titus Brown advocates for publishing all pieces of data publicly on figshare. Half of people’s problems with data stem from the desire to keep data private until publishing, which is usually >3 years from the time of collection. By then you can’t find the data, or you spend too much time trying to make them “perfect”. Publish the data as soon as you collect them; you can always go back and improve the annotations. When you do a “data dump”, your name will be associated with those data. The chances of people being malicious and wanting to steal your data are almost unheard of. (If you have examples, it would be interesting to hear them.) There is almost never a reason NOT to publish data as soon as they are collected. Publishing data right away is also a great way to advertise what you are doing, so others can collaborate or avoid going down the same avenue if it is unproductive.
  5. Join data working groups
  6. Using version control repositories for code and data analyses (github)
  7. Set expectations for ‘reproducibility checkpoints’ with team “hackathons” or open-computer group meetings dedicated to analysis
  8. Lab paper reviews focused on data reproducibility
  9. Look for help/support outside the lab, e.g. bioinformatics or user group office hours, Stack Overflow, BioStars. You are not alone in wanting to learn these things. We can never know everything, so talk to people.

Bioinformatics resources:

https://github.com/mblmicdiv/course2016/blob/master/bioinfo-resources.md

If you see a typo or problem with the tutorials, please let people know. 🙂

Here is an exercise to try!

https://github.com/datacarpentry/2015-08-24-ISU/blob/gh-pages/lessons/00-intro-to-data-tidy.md

View of Eel Pond from Water St.



Marine Microbes! What to do with all the data?

UPDATE: Check out Titus’ blog post, Bashing on monstrous sequencing collections.

Since Sept 2015, I’ve been a PhD student in C. Titus Brown’s lab at UC Davis working with data from Moore’s Marine Microbial Eukaryotic Transcriptome Sequencing Project (MMETSP). I would like to share some progress on that front from the past 6 months. Comments welcome!

The MMETSP is a unique and valuable data set consisting of 678 cultured samples from 306 species representing more than 40 phyla (Keeling et al. 2014). It is public and available on NCBI. The samples were submitted by a large consortium of PIs to the same sequencing facility, and all were sequenced as PE 100 reads on an Illumina HiSeq 2000 (a few samples were run on a GAIIx).

For many species in this set, these are the only sequence data available, because reference genomes do not exist. The figure below from Keeling et al. 2014 shows the diverse relationships between samples represented in the MMETSP. The dashed lines indicate groups without a reference genome, whereas the solid lines indicate groups with references.

[Figure from Keeling et al. 2014 (doi:10.1371/journal.pbio.1001889.g002): relationships among MMETSP samples]

Here are a few stars (Micromonas pusilla – left, Thalassiosira pseudonana – right):

[Micrographs: Micromonas pusilla (left); Thalassiosira pseudonana (right, image credit: N. Kröger, TU Dresden)]

It’s worth emphasizing that this is one of the largest (if not THE largest) public, standardized RNAseq datasets available from a diversity of species. Related to this cool dataset, I’m really grateful for a number of things: a) the MMETSP community, who took the initiative to put this sequencing dataset together; b) the Moore Data Driven Discovery program for funding; c) working with a great PI who is willing and able to focus efforts on these data; d) being in a time when working with a coordinated nucleic acid sequencing data set from such a large number of species is even possible.

Automated De Novo Transcriptome Assembly Pipeline

The NCGR has already put together de novo transcriptome assemblies of all samples from this data set with their own pipeline. Part of the reason we decided to make our own pipeline was that we were curious to see if ours would be different. Also, because I’m a new student, developing and automating this pipeline has been a great way for me to learn about automating pipeline scripts, de novo transcriptome assembly evaluation, and the lab’s diginorm/khmer software. We’ve assembled just a subset of 56 samples so far, not the entire data set. It turns out that our assemblies are different from NCGR’s. (More on this at the bottom of this post.)

All scripts and info that I’ve been working with are available on github. The pipeline is a modification of the first three steps of the Eel Pond mRNAseq Protocol to run on an AWS instance. (I’m aware that these are not user-friendly scripts right now, sorry. Working on that. My focus thus far has been on getting these to be functional.)

  1. download raw reads from NCBI
  2. trim raw reads, check quality
  3. digital normalization with khmer
  4. de novo transcriptome assembly with Trinity
  5. compare new assemblies to existing assemblies done by NCGR

The script getdata.py takes a metadata file downloaded from NCBI (SraRunInfo.csv); see the screenshot below for how to obtain this file:

[Screenshot: obtaining SraRunInfo.csv from NCBI]

The metadata file contains info such as the run (ID), download_path, ScientificName, and SampleName. These are fed into a simple Python dictionary data structure, which allows for looping and indexing to easily access and run individual processes on these files in an automated, high-throughput way (a subset of the dictionary data structure is shown below):

[Screenshot: a subset of the dictionary data structure]

Each subsequent script (trim_qc.py, diginorm_mmetsp.py, assembly.py, report.py, salmon.py) uses this dictionary structure to loop through and run commands for different software (trimmomatic, fastqc, khmer, Trinity, etc). Assemblies were done separately for each sample, regardless of how the samples were named. This way, we will be able to see how closely assemblies cluster together, or apart, agnostic of the scientific naming.
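
For a rough idea of the dictionary-driven approach (not the exact code in getdata.py, just a sketch built from the SraRunInfo.csv columns mentioned above):

# Read SraRunInfo.csv into a dict keyed by SRA run ID so later steps can loop over samples.
import csv

def load_run_info(sra_run_info="SraRunInfo.csv"):
    runs = {}
    with open(sra_run_info) as handle:
        for row in csv.DictReader(handle):
            runs[row["Run"]] = {
                "download_path": row["download_path"],
                "ScientificName": row["ScientificName"],
                "SampleName": row["SampleName"],
            }
    return runs

if __name__ == "__main__":
    runs = load_run_info()
    # Each step (trimming, diginorm, assembly, ...) can loop over this dict.
    for run_id, info in runs.items():
        print(run_id, info["SampleName"], info["ScientificName"])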

Challenges

There have been several challenges so far in working with this public data set.

It might seem simple in retrospect, but it actually took me a long time to figure out how to grab the sequencing files, what to call them, and how to connect names of samples and codes. The SraRunInfo.csv file available from NCBI helped us to translate SRA id to MMETSP id and scientific names, but figuring this out required some poking around and emailing people.

Second, for anyone who is in charge of naming samples in the future: small deviations from the naming convention, e.g. an extra “_” after the sample name, can mess up automated scripts. For example,

MMETSP0251_2
MMETSP0229_2

had to be split with the following lines of code:

# Strip a trailing "_<n>" suffix so the ID matches the standard MMETSP naming.
mmetsp = line_data[position_mmetsp]
test_mmetsp = mmetsp.split("_")
if len(test_mmetsp) > 1:
    mmetsp_id = test_mmetsp[0]
else:
    mmetsp_id = mmetsp

Resulting in this:

['MMETSP0251', '2']
['MMETSP0229', '2']

Then I grabbed the first entry of the list so that these IDs looked like the rest of the MMETSP IDs, without the “_”. Not really a big deal, but it created a separate problem that required some figuring out. My advice is to pick one naming convention and then name all of the files with that exact structure.

Lastly, several of the records in the SraRunInfo.csv were not found on the NCBI server, which required emailing with SRA.

[Screenshot: “not found” errors for several SraRunInfo.csv records]

The people affiliated with SRA who responded were incredibly helpful and restored the links.

Assembly Comparisons

I used the transrate software for de novo transcriptome assembly quality analysis to compare our assemblies with the NCGR assemblies (*.cds.fa.gz files). Below are frequency distributions of the proportion of reference contigs with Conditional Reciprocal Best-hits BLAST (CRBB) matches, described in Aubry et al. 2014. The left histogram below shows our “DIB” contigs compared to the NCGR assemblies; on the right are NCGR contigs compared to DIB contigs. This means that we have assembled almost everything in their assemblies, plus some extra stuff!

[Histograms: proportion of reference contigs with CRBB, DIB vs. NCGR (left) and NCGR vs. DIB (right)]

Here is the same metric shown in a different way:

[Violin plots of the same CRBB proportions]

We’re not sure whether the extra stuff we’ve assembled is real, so we plan to ground truth a few assemblies with a few reference genomes to see.
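
For reference, a single DIB-vs-NCGR comparison can be invoked with transrate roughly like this (a sketch with placeholder file names, assuming transrate is installed):

# Run transrate with a reference assembly to get the CRBB-based comparison metrics.
import subprocess

dib_assembly = "MMETSP0090.Trinity.fasta"   # hypothetical DIB assembly file name
ncgr_assembly = "MMETSP0090.cds.fa"         # corresponding NCGR assembly

subprocess.run([
    "transrate",
    "--assembly", dib_assembly,     # assembly being evaluated
    "--reference", ncgr_assembly,   # reference used for CRBB
    "--output", "transrate_dib_vs_ncgr",
], check=True)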

For further exploration of these contig metrics, here are some static notebooks:

If you would like to explore these data yourself, here is an interactive binder link that lets you run the actual graph production (thanks to Titus for creating this link!):

http://mybinder.org/repo/dib-lab/MMETSP

Outcomes

Based on these findings, several questions have come up about de novo transcriptome assemblies. Why are there different results from different pipelines? Are these differences significant? What can this tell us about other de novo transcriptome assemblies? Having distributions of quality metrics from assemblies is a unique situation. Usually, assemblies are done by one pipeline for just one species at a time, so means and standard deviations are not available. More and more new de novo transcriptome assemblies are being done and published by different groups worldwide for species x, y, z, yet evaluating the quality of these assemblies is not straightforward. Is it worth developing approaches, such as a prioritized set of metrics, that would allow any de novo assembly to be evaluated in a standard way?

Moving forward, the plan is to:

  • keep working on this assembly evaluation problem,
  • assemble the rest of the samples in this data set,
  • make these data and pipeline scripts more user-friendly and available,
  • standardize annotations across species to enable meaningful cross-species analyses and comparisons.

Questions for the Community

What analyses would be useful to you?

What processed files are useful to you? (assembly fasta files, trimmed reads, diginorm reads)

The idea is to make this data-intensive analysis process easier for the consortium of PIs who put these data together, so that discoveries can be proportional to the data collected. In doing so, we want to make sure we’re on the right track. If you’re reading this and are interested in using these data, we’d like to hear from you!

Special thanks to C. Titus Brown, Harriet Alexander and Richard Smith-Unna for guidance, insightful comments, and plots!

Also, the cat:


Intro git – Lab meeting

By the end of this meeting, you will be able to:

  • Create a new repository
  • Edit README.md (markdown lesson from Reid Brennan)
  • Clone the repository to your local desktop
  • Make changes
  • Commit and push changes

1. Create a new repository

Click on the “New” button to create a new repository:

[Screenshot: the “New” repository button on GitHub]

Name the repository whatever you would like. Examples: test, data, lab protocols, awesome killifish RNA extractions, significant genes lists, abalone data files, etc. The idea is that this will be your repository/directory of version-controlled files that you will pull/push back and forth between your computer and GitHub. Check “Initialize this repository with README”:

[Screenshot: the new repository form]

You have created a new repository!

[Screenshot: the newly created repository]

2. Edit the README.md markdown file (Reid)

3. Clone the directory to your local desktop

To copy the URL, click on the clipboard icon next to the web address for this repository (see below). Side note: this is the same web address you can use to share this repository with colleagues. You can also just copy the URL from the address bar in your browser.

[Screenshot: copying the repository URL]

Open your terminal, navigate to a directory where you would like to put the new repository. Type this command to “clone” the repository:

git clone https://github.com/ljcohen/super_awesome_killifish_data.git

[Screenshot: terminal output from git clone]

You should see the “Cloning into ___” message, like in the screenshot above. Use the ‘ls’ command to list the contents of the current working directory to make sure it’s there. It is!

[Screenshot: ls showing the cloned directory]

4. Make changes to the git directory

Now, we can make changes to this directory and they will be tracked. First, change directories into the one you just created:

cd super_awesome_killifish_data

Let’s copy a file into this directory. (This is a small text file I had one directory up from the current one, so I use ../ to indicate where it is found and . to indicate that I want to copy it into the current directory.)

cp ../cluster_sizes.txt .

5. Commit and push changes

Now that you have made a change to this directory, you want to make sure it is saved to GitHub. The following commands are standard for staging and pushing changes to a GitHub repository:

git status                                            # see which files are new or modified
git add --all                                         # stage all changes in this directory
git commit -m "added cluster_sizes.txt for A_xenica"  # record the change with a message
git push                                              # send the commit to GitHub

(Type in your GitHub user name and password. The characters might not show up on the screen, but they are being entered, don’t worry!)

[Screenshot: terminal output from git add/commit/push]

Now you can go to GitHub on the web and see the changes you made:

[Screenshot: the new file visible on GitHub]



Fish Gill Cell Culture System, MCB263 science communication assignment

My tweet for this week’s MCB263 science communication assignment is about a new technique from researchers at King’s College London, published in Nature Protocols, that uses cultured freshwater gill tissue as a proxy for in vivo fish gill tissue, which is commonly used in toxicity testing to assess whether compounds have negative biological effects. Since gill tissue is designed to filter water and extract oxygen and osmolytes for the fish, it is well suited to measuring the toxicity of compounds in water, which get trapped in the tissue. With the fish gill cell culture system of Schnell et al. (2016), whole fish do not have to be sacrificed for this purpose; only the gill tissue is used for toxicity assessment, bioaccumulation studies, or environmental monitoring. Not only is this system resource-efficient and portable, requiring only a thin layer of tissue and some reagents to maintain, but it also does not require raising and maintaining whole animals. The elegant features of gill tissue can be used without waste.

This video demonstrates the technique:

My group’s tweets:

Laura Perilla-Henao: market resistance to synthetic malaria drug

Ryan Kawakita: E. coli used to generate morphine precursor

Stephanie Fung: researchers publish in Cell exploring microbiome evolution from hunter-gatherer to modern western society

Prema Karunanithi: release of real-time data on Zika virus infection study in monkeys
