Archive for the 'Systems' Category


Criticism of genetics research continues

I’ve posted a couple of times about how the big genetics research explosion during the ’00s has produced a backlash of criticism that the practical results of the big bucks spent on the enterprise have not matched the investment. That theme continues with an article in the Times Online yesterday. Another group of British and American researchers is basically saying the juice isn’t worth the squeezing.

In biomedical scientific research there’s always a basic tension: 1) spend the money on basic biological research to clear up the many remaining mysteries of living things, or 2) spend it on the most direct applied research to get to specific medical benefits sooner. Basic research is often the long way around in science. It takes longer and doesn’t get to treatments as quickly, but when you get there you’ve got a better picture of biological processes which may have unexpected benefits in other medical areas. The quick results route — as directly as possible from A to B — may get there or it may get waylaid because some basic biology piece is missing. Most funders of medical research hedge their bets by spreading research dollars around in both strategies.

It troubles me when I see one group of scientists devaluing the work of another. When I hear “it’s not worth the money” I often interpret that to mean: “Please, please fund MY research!” or “I told you so!”

Science can be a pretty rough-and-tumble business. Because scientists generally have big vocabularies the verbal blows aren’t as crude as you’d hear on, say, the old Jerry Springer show. But the insults are there.

Also, to call for pulling the plug on genetics research at this point tends to violate a basic principle of science: if you don’t get the results you were expecting with some approach, an even more important question comes up: Why? Ever since the Human Genome Project reached its preliminary results in 2000 a bunch of thorny questions have come up. Why are there so few genes (~30,000-40,000) in the human genome, and how do we get our magnificent selves from so few? What’s with all this “junk” DNA anyway? Why aren’t there a few glaringly prominent genetic clinkers in common diseases like cancer instead of a bunch of genetic flags whose roles are really hard to sort out?

I don’t have the answers, of course, but there are answers to these fundamental questions about our genes, and they’re important. We’re not going to master the frailties that befall us without answers.

As I’ve said before, it’s too bad that nature isn’t simple. If it were, it would be a lot easier to solve problems that vex us. But, whether it’s biology or physics, deep complexity is a basic feature of reality, and we have little choice but to doggedly track it down. And, isn’t that something that keeps life interesting anyway?


Overcoming big hurdles in cancer research and treatment

I’ve posted a couple of times [here and here] about the complexity of living systems and the challenges that reality presents for medicine. That’s certainly true for cancer.

But the march toward greater information goes on. A Princeton University press release titled “Scientists find way to catalog all that goes wrong in a cancer cell” describes an advance in algorithms that makes it possible to define the pathways of complex genetic interactions. Researchers “were able to systematically categorize and pinpoint the alterations in cancer pathways and to reveal the underlying regulatory code in DNA.”

Researchers Saeed Tavazoie, Hani Goodarzi, and Olivier Elemento say:

“We are discovering that there are many components inside the cell that can get mutated and give rise to cancer…Future cancer therapies have to take into account these specific pathways that have been mutated in individual cancers and treat patients specifically for that.”

The researchers developed an algorithm, a problem-solving computer program that sorts through the behavior of each of 20,000 genes operating in a tumor cell. When genes are turned “on,” they activate or “express” proteins that serve as signals, creating different pathways of action. Cancer cells often act in aberrant ways, and the algorithm can detect these subtle changes and track all of them. […] The algorithm devised by the group scans the DNA sequence of a given cell — its genome — and deciphers which sequences are controlling what pathways and whether any are acting differently from the norm. By deciphering the patterns, the scientists can conjure up the genetic regulatory code that is underlying a particular cancer.

The goal: developing much more specific therapies targeted to these variations.
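The general idea the release describes, comparing each gene’s activity in a tumor cell against a normal baseline and rolling those comparisons up into pathways, can be sketched in miniature. This is a toy illustration, not the Princeton group’s actual algorithm; the gene names, baseline numbers, and the z-score cutoff are all invented for the example.

```python
# Toy sketch: flag pathways whose member genes show expression far from a
# normal baseline. NOT the published algorithm; purely illustrative.

def aberrant_pathways(expression, baseline, pathways, z_cutoff=2.0):
    """expression: gene -> tumor expression level.
    baseline: gene -> (mean, sd) of expression in normal tissue.
    pathways: pathway name -> list of member genes.
    Returns pathways whose average |z-score| exceeds the cutoff."""
    flagged = {}
    for name, genes in pathways.items():
        zs = [abs(expression[g] - baseline[g][0]) / baseline[g][1]
              for g in genes if g in expression and g in baseline]
        if zs and sum(zs) / len(zs) > z_cutoff:
            flagged[name] = round(sum(zs) / len(zs), 2)
    return flagged

# Invented example data: the "growth" pathway looks aberrant, the
# housekeeping gene does not.
baseline = {"TP53": (5.0, 1.0), "MYC": (3.0, 0.5), "ACTB": (8.0, 1.0)}
expression = {"TP53": 1.0, "MYC": 6.0, "ACTB": 8.2}
pathways = {"growth": ["TP53", "MYC"], "housekeeping": ["ACTB"]}
print(aberrant_pathways(expression, baseline, pathways))  # → {'growth': 5.0}
```

The real work, of course, is in scanning actual DNA sequence and regulatory signals across 20,000 genes; this just shows the shape of the bookkeeping.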

That’s cool. It’s great to keep digging for the information to understand and treat cancers. But I have to add — having spent many years in the sometimes idealistic realm of cancer control — there’s another giant issue that has reared its ugly head in recent years: cost. When I started my career three decades ago, nobody was talking about the economic barriers to developing and widely adopting cancer therapies. But the ongoing farcical struggle over reasonably equitable health care access for all Americans (much less the rest of the world) demonstrates that how much cancer treatment costs, and who pays for it, is a formidable problem in “curing cancer” in its own right. Teasing out the information this research calls for, and then delivering practical treatments from it to a big population, is going to be a long process.


Life is complex. Sorry.

Here’s part two of why there’s a lot of disappointment that the investment by taxpayers and investors in genetics research hasn’t resulted in a lot of great treatments for diseases. What has been discovered is that even the most “simple” living things are more complex than we thought a few years ago. Studies published in last week’s Science take the detail known about the systems of a simple bacterium to a new level. The bottom line: no living thing is simple.

Since you’ve gotta have a subscription to Science to read the articles (and I don’t) we’ll go on the abstracts and what was reported in Wired about them. Teams of researchers studied Mycoplasma pneumoniae, a bacterium that has only one-fifth the genes of the usual study subject, E. coli. In Brandon Keim’s Wired article he says:

The analysis combined information about gene regulation, protein production and cell structure… It’s far closer to a “blueprint” than a mere genome readout, and reveals processes “that are much more subtle and intricate than were previously considered possible in bacteria,” wrote University of Arizona biologists Howard Ochman and Rahul Raghavan in a commentary…In short, there was a lot going on in lowly, supposedly simple M. pneumoniae, and much of it is beyond the grasp of what’s now known about cell function…“Linear mapping of genes to function rarely considers how a cell actually accomplishes the processes,” wrote Ochman and Raghavan. “There is no such thing as a ’simple’ bacterium.”

The scientists suggest that the techniques can be applied to more complex organisms with beneficial results. Some skeptics might be thinking, “Yeah, we’ve heard that before.”

It would be great if real life biology were simpler. We might have relief by now from things we dread like cancer. But the truth is that living things are incredibly complex and so are the chronic diseases that afflict us. Life science research and research into cancer, in particular, has been like peeling an onion or opening the so-called Russian dolls: inside each layer we’ve found another layer of complexity. It’s daunting, but there’s little we can do but roll up our sleeves and get back to figuring out the next puzzle.

People are going to have to take it upon themselves to do the well-known preventive and health-sustaining things, like a healthy diet and exercise, to keep their best health for a lifetime. Eventually the depth of biological mysteries will be puzzled out, but it’s going to continue to be a long trip, and it may never be possible to count entirely on external treatments to give us back good health if we’ve let it slide too long.


More on computer modeling in medical biology

Last week I posted about how a group at the Burnham Institute and collaborators had completed a full computer model of the metabolism of a bacterium. That probably doesn’t sound earth-shaking, but advances in computer modeling of biological systems at many levels will be powerful scientific and medical tools.

So today funding for another modeling project was announced by Mt. Sinai Medical School. They’re getting a federal grant to model kidney tissue. The idea is to get greater understanding of some of the cellular changes that are part of kidney diseases and to learn how to generate kidney tissue through nanofabrication. As their release puts it:

If successful, the research—which ties together several emerging technologies including virtual tissue modeling and nanofabrication—could lead to a more predictable way for researchers to engineer tissue outside the body and, consequently, to screen for new drugs. […]

These computational models, or virtual tissue, will form the basis for designing the device for recreating kidney function. The hope is to learn the rules of tissue organization as the team refines the device through testing the computer models and imaging the flow of cell signals within the reassembled tissue from both mouse and .

Bio-medical scientists have wanted to bring the power of computer modeling to research for a long time. It looks like some really substantial results are close at hand.


Full metabolism model. How exciting!

Science published a study today by the Burnham Institute at UC San Diego, The Scripps Institute, and the Novartis Genomics Institute reporting that they have for the first time modeled the central metabolic pathway system of a bacterium, complete with 3-D, atomic resolution overlays of the involved proteins. Exciting, no?

On the Burnham website they say:

Combining biochemical studies, structural genomics and computer modeling, the researchers deciphered the shapes, functions and interactions of 478 proteins that make up T. maritima’s central metabolism. The team also found connections between these proteins and 503 unique metabolites in 562 intracellular and 83 extracellular metabolic reactions.

“We have built an actual three dimensional model of every protein in the central metabolic system,” said Adam Godzik, Ph.D., director of Burnham’s Bioinformatics and Systems Biology program. “We got the whole thing. This is analogous to sequencing an entire genome.”
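The numbers in the release (478 proteins, 503 metabolites, 562 intracellular and 83 extracellular reactions) suggest that, underneath the 3-D structural work, the model’s backbone is a network linking reactions to the metabolites they consume and produce. Here’s a hypothetical sketch of that bookkeeping; the class, method names, and the two glycolysis reactions are my own invention for illustration, not Burnham’s code.

```python
# Hypothetical sketch: a metabolic network as a graph linking each reaction
# to its input and output metabolites. Illustrative only.

class MetabolicNetwork:
    def __init__(self):
        self.reactions = {}  # reaction name -> (input set, output set)

    def add_reaction(self, name, inputs, outputs):
        self.reactions[name] = (set(inputs), set(outputs))

    def producers_of(self, metabolite):
        """All reactions whose outputs include the given metabolite."""
        return [r for r, (_, outs) in self.reactions.items() if metabolite in outs]

    def reachable(self, start_metabolites):
        """Metabolites reachable from a starting pool by repeatedly firing
        any reaction whose inputs are all available."""
        pool = set(start_metabolites)
        changed = True
        while changed:
            changed = False
            for ins, outs in self.reactions.values():
                if ins <= pool and not outs <= pool:
                    pool |= outs
                    changed = True
        return pool

# Two textbook glycolysis steps as example reactions:
net = MetabolicNetwork()
net.add_reaction("hexokinase", ["glucose", "ATP"], ["G6P", "ADP"])
net.add_reaction("pgi", ["G6P"], ["F6P"])
print(sorted(net.reachable({"glucose", "ATP"})))
```

Scale that kind of structure up to 562 reactions and drape a solved protein structure over every node, and you get a sense of what the team assembled.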

Here’s a link to a little video on Vimeo about the project.

Developing a solid computer model of any living thing — even a bacterium — is an important step. Computer modeling of airplanes, electronic components, architectural projects and many other things has enabled huge strides in understanding and efficiently designing many things we use today. But the innards of any living thing have been so complex that full modeling has been pretty much beyond reach. What’s exciting about this accomplishment is the prospect of scaling up to model cells, organisms, and even ecosystems. If computer models of cells and organisms prove to be as useful as modeling in other areas has, then this is a bit like putting a rocket engine behind bio-medical research.


Oops! I just went on a spending spree

Hey President Obama, I’m doing my part to restore the economy!

This morning I saw an article on CNET News about a book coming out tomorrow titled Total Recall, by Gordon Bell and Jim Gemmell. They’ve been doing research at Microsoft for years about what it would be like to record everything in your life every day and keep it in a big database. I tend to use the term “life-caching” for this and it’s a topic I’m interested in. I think it has big implications for areas like Health 2.0 and EHR (electronic health records).

I’m here to confess that Amazon’s “Other people who bought this book also bought…” is a gimmick that works on me. I saw Complexity: A Guided Tour by Melanie Mitchell. I’ve been interested in chaos and complexity — sort of two sides of the same coin — since James Gleick’s Chaos: Making a New Science way back in 1987. I haven’t read anything new in quite awhile, but Complexity has chapters on its application to information theory and biology. I couldn’t resist.

Then I went crazy because the other folks who bought Complexity also bought Wetware: A Computer in Every Living Cell by Dennis Bray. It’s also about biological processes and their relation to information and computation. And finally my buddies who bought Wetware also bought The Machinery of Life by David Goodsell. Naturally, so did I.

Stop me before I spend again!! Well, at least I get free shipping.
