Brian Krueger is the owner, creator and coder of LabSpaces by night and Next Generation Sequencer by day. In his blog you will find articles about technology, molecular biology, and editorial comments on the current state of science on the internet.
My posts are presented as opinion and commentary and do not represent the views of LabSpaces Productions, LLC, my employer, or my educational institution.
As the second day of AGBT kicked off, it became quite clear that this meeting would be dominated by medical genomics. There were a few talks sprinkled in about gut or sewer microbiomes but the vast majority of the talks the last two days have been on clinical genomic sequencing. This is fine by me since it’s exactly what we do in the Genomic Analysis Facility in Duke’s Center for Human Genome Variation. It’s really nice to see how other centers are approaching these problems. Unfortunately, this is one of the few opportunities we have to peek into each other’s operations.
Yesterday’s first talk was by Russ Altman of Stanford University. Russ has been a leader in the field of pharmacogenomics, and he presented his work on developing the Pharmacogenomics Knowledgebase (PharmGKB, pharmGKB.org). He led by saying, “Don’t ever give a talk about a website,” and in his case it was true because WiFi in the conference room was down for the majority of his talk. He urged the crowd to follow along on the website, but only those of us with a cell connection could join him. Russ pointed out the major drawbacks of using GWAS and SNP chips for obtaining information about pharmacogenomic associations and joined pretty much everyone else in saying that the standard these days really should be, at the very least, whole exome sequencing of patients. His website, PharmGKB, is a curated database of all of the published variant/drug interaction data that can be used in the downstream analysis of a patient’s genome to better understand how they might respond to particular drugs. As an example, he paged through Stephen Quake’s genome, pointing out how some of the variants could predict drug response, but his overall theme was that we need much more population-wide data about genome variation to really take advantage of personalized medicine.
Christine Eng of Baylor College of Medicine presented a talk about the launch of their CLIA-certified sequencing lab, detailing the clinical aspects of exome sequencing and diagnosis at Baylor. She said that Baylor has whole exome sequenced 450 patients and had a 27% disease discovery rate. This number drew criticism both on Twitter and in the questions, but Eng clarified the discovery rate by saying that their team only counts a discovery when they are absolutely positive that the mutation is causal. This makes sense in a clinical environment. At Duke, our Center has a discovery rate much higher than that, but many of the gene discoveries are novel, and although we’re pretty certain they are causal, we have to do further follow-up experiments in animals or tissue culture to truly be sure. Eng also presented a cost breakdown of their process and said that their “package” was around $7000 an exome from preparation to analysis; however, this seemed a little high considering single exomes run about $700 these days. At Baylor, the final data is analyzed by multiple physicians before a final diagnosis is made, so maybe this adds significantly to the cost.
Eng’s talk was followed by a talk from Elizabeth Worthey of the Medical College of Wisconsin on clinical whole genome sequencing. Worthey reported that whole genome sequencing at MCW of course captures more variants than whole exome sequencing, but most of these variants are not clinically valuable. One of the most interesting aspects of Worthey’s talk was when she spoke about the speed of the process. It’s hard to get insurance companies to pay for a test that may not provide a diagnosis and takes 3 months to process. In the clinic, they find that using the new rapid techniques and streamlining their bioinformatics and sample handling with a LIMS can bring their time to diagnosis down to between 2 and 4 weeks. Both Worthey and Eng presented case studies highlighting the benefit of using sequencing to obtain a diagnosis. It’s both frustrating and encouraging to hear these stories about families that have spent tens of thousands of dollars searching for a diagnosis and come up empty-handed. It’s very rewarding to be able to sequence a patient and provide them with the information they need in a cost-effective manner. At the CHGV, we have found in our own clinical (Research! Not CLIA certified) process that if a patient’s suspected genetic disorder isn’t determined after 2 visits, then it’s more economical to just trio sequence the family.
Unfortunately, I only saw two of the afternoon talks, but they were both on clinical sequencing topics. The first of these was from Stephen Kingsmore of Children’s Mercy Hospital, who recently published (last October) a phenomenal paper on rapid diagnosis of genetic disease in a neonatal intensive care unit. The technique employs an Illumina HiSeq 2500 to perform rapid genome sequencing on a patient who is thought to have a genetic disorder. Using current technology, samples can be prepared and sequenced in 24 hrs. The current bottleneck for genome sequencing is analysis, which can take anywhere from a couple of days to a week or more. Kingsmore’s group got around this problem by using what he calls STAT-seq: instead of looking for gene variants across the entire genome, his tool only looks for variants in the 3,677 genes known to be causal for genetic disease. This significantly decreases the analysis time, and his team can now go from sample to diagnosis in 48 hrs. One other interesting point that Kingsmore made was that geneticists really need to move away from Sanger sequencing for validation; he showed that in some of his diagnostic cases, Sanger sequencing confounded the diagnosis. Other researchers are coming to the same conclusion, and many have switched over to using techniques such as digital PCR or bead arrays to confirm sequencing-based diagnoses.
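The core idea behind the STAT-seq speedup — restricting interpretation to a curated panel of known disease genes rather than the whole genome — can be sketched in a few lines. This is my own toy illustration, not Kingsmore’s pipeline; the gene coordinates and variant format here are placeholders.

```python
# Toy sketch of panel-based variant filtering: instead of interpreting every
# variant genome-wide, keep only variants that fall inside a curated list of
# known disease genes. Coordinates and panel contents are illustrative only;
# a real panel would hold ~3,677 gene entries.

DISEASE_GENES = {
    # gene symbol -> (chromosome, start, end)
    "CFTR": ("chr7", 117120016, 117308718),
    "SMN1": ("chr5", 70220767, 70248839),
}

def in_panel(chrom, pos, panel=DISEASE_GENES):
    """Return the gene symbol if (chrom, pos) lands inside a panel gene, else None."""
    for gene, (g_chrom, start, end) in panel.items():
        if chrom == g_chrom and start <= pos <= end:
            return gene
    return None

def filter_variants(variants, panel=DISEASE_GENES):
    """Keep only variants inside panel genes; variants are (chrom, pos, ref, alt) tuples."""
    kept = []
    for chrom, pos, ref, alt in variants:
        gene = in_panel(chrom, pos, panel)
        if gene is not None:
            kept.append((gene, chrom, pos, ref, alt))
    return kept

# Toy example: one variant inside CFTR, one far outside any panel gene.
calls = [("chr7", 117199644, "G", "A"), ("chr1", 5000, "T", "C")]
print(filter_variants(calls))  # only the CFTR variant survives
```

Throwing away the off-panel variants before annotation is what collapses days of analysis into hours: the interpretation workload scales with the panel size, not the genome.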
The last talk I went to was from Jonathan Berg at UNC. I liked the idea of Berg’s talk, but the paternalistic nature of it really annoyed me. It seems like geneticists these days fall into two camps: Give patients only the genomic information we think is relevant or let the patient choose how much information they’d like to know. The reasoning in the first camp is that genetic information is complicated and if doctors can barely understand what the data mean then what’s the point in scaring the crap out of a patient by giving them all of the data to trudge through. This is especially true in these early days when the causal value of a variant is low or unknown. Berg suggests using a binning/scoring system for variant reporting. He assigns scores to variants based on what is known about them, the genes, and the treatments. The final score determines whether the finding will be reported to the patient. Of course, you can set a score threshold and report more or less information, but Berg cautions that we should do our best to not report incidental findings: most of the variation in the genome is meaningless. I agree with him on the second part, but I still believe the patient should have access to as much of their genetic information as they desire. Many of the fears about releasing genetic information to patients can be solved through education and interaction with researchers or genetic counselors. The paternalistic view that we should only release what we think is relevant is a huge mistake. It feels extremely draconian, like taking away Galileo’s telescope and telling him that looking at the sky is only for the big kids. The genome is not something to be feared, it’s something to be explored. If we provide patients with the right tools and the right information we may not only be able to better inform them about their disease, but we might also be able to teach them something about science, biology, and genetics in the process.
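To make Berg’s binning idea concrete, here is a toy scoring scheme of the kind he described: each variant accumulates points for what is known about its pathogenicity, its gene, and its treatability, and only variants clearing a threshold get reported. The categories, weights, and threshold below are entirely my invention for illustration; his actual criteria were not spelled out in the talk.

```python
# Toy sketch of a binning/scoring scheme for variant reporting.
# All categories, weights, and the threshold are invented for illustration.

SCORES = {
    "pathogenicity": {"known_pathogenic": 3, "likely_pathogenic": 2, "uncertain": 0},
    "gene_evidence": {"established_disease_gene": 2, "candidate": 1, "none": 0},
    "actionability": {"treatment_available": 2, "surveillance_only": 1, "none": 0},
}

REPORT_THRESHOLD = 5  # lower it to report more findings, raise it to report fewer

def score_variant(pathogenicity, gene_evidence, actionability):
    """Sum the evidence scores across the three axes."""
    return (SCORES["pathogenicity"][pathogenicity]
            + SCORES["gene_evidence"][gene_evidence]
            + SCORES["actionability"][actionability])

def should_report(pathogenicity, gene_evidence, actionability,
                  threshold=REPORT_THRESHOLD):
    """Report the variant only if its total score clears the threshold."""
    return score_variant(pathogenicity, gene_evidence, actionability) >= threshold

# A well-characterized, actionable variant clears the bar...
print(should_report("known_pathogenic", "established_disease_gene",
                    "treatment_available"))  # True
# ...while a variant of uncertain significance in a candidate gene does not.
print(should_report("uncertain", "candidate", "none"))  # False
```

The threshold is exactly where the paternalism debate lives: a lab sets it on the patient’s behalf, whereas the alternative I favor would let patients slide it themselves.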