Archive for the 'Systems' Category


one more chance for humans?

I’ve been following with interest the “Jeopardy!” programs featuring Watson, the IBM supercomputer designed to play the game. On the first program Monday Watson took an early lead over the human “Jeopardy!” champs, Ken Jennings and Brad Rutter, but it seemed stumped when the high-dollar questions were reached, and the game ended in a tie between Watson and Rutter. In the second game Watson ran away with the lead.

So what caught my attention this morning was the headline of the article in International Business Times: “Round Two Goes to Watson; Humans Have One More Chance.” So how did the article’s author, Gabriel Pena, mean “one more chance for humans”? One more chance to win “Jeopardy!”? Or does it mean something more existentially ominous: one more chance for humans before we’re replaced by machines smarter than we are?

I don’t think it’s time to panic. And I’m a real skeptic of ideas like the “singularity”: a time in the not-too-distant future at which computers become so intelligent and superior at controlling systems in our world that we’re irrelevant.

But my wife’s reaction to the prospect of Watson beating some really smart guys is telling. She asked, “So who’s going to lose their job?” Ah, yes! Haven’t we learned a few things during the so-called “Great Recession”? One of them is that people laid off are not being rehired; companies are investing in “productivity” tools rather than jobs. “Productivity” is a code word for doing the same work with fewer jobs. If you are not the person running the new productivity devices then your job’s in jeopardy for sure. Implementing higher productivity has been a basic economic process for a long time, but Watson’s technology is enough to make the hair stand up on the back of your neck.

Technological productivity advances are like riding on the back of a tiger: if you stay on its back you’re okay, but if you fall off you’re chow. Watson is a productivity system that appears to have made a real stride in being able to take an English-language question and parse it into a specific information request better than earlier question-answering technologies. IBM calls it “open question answering.” The company is going to turn this into a commercial product and apply it initially in medicine and medical law. It will help researchers or doctors plow through vast stores of unstructured information like journals to come up with answers to their questions more efficiently than anything before. As an IBM exec, David McQueeney, said to the Washington Post this morning:

“Imagine taking Watson, and instead of feeding it song lyrics and Shakespeare, imagine feeding it medical papers and theses,” he said. “And put that machine next to an expert human clinician.”

With the computer intelligently processing a vast amount of data and the doctor using his or her own professional knowledge to guide and refine a search, McQueeney said, IBM thinks the quality of that diagnosis could be better than what the doctor comes up with alone.

Looks like doctors will be the first up on the back of this tiger.

I have been a skeptic about artificial intelligence for a long time. I’ve been hearing that AI is right around the corner since the ’50s. Lots of claims have been made but, like the rocket belt, the flying car, nuclear fusion, undersea cities, and the cure for cancer, the expected results haven’t been delivered. Watson is by no means an equivalent to human intelligence, but it appears to be an indicator that progress is being made. We’re not about to be made totally obsolete any time soon — if ever — but the “one more chance” for many of us is to stay abreast of this emerging technology and use it as our tool rather than have it put us behind the 8-ball.


is synthetic life approaching?


There has always been a metaphysical aura about life. In addition to the material in a cell or other living thing, most people seem to think that when we say “life” we’re talking about a spark or energy that transcends the material constituents of that living thing.

But suppose that organisms showing all the properties of life could be created from off-the-shelf raw materials of our world and made to function as living things through human-designed processes? At no point would some spark or energy be added to jump-start life processes, although complex chemical reactions are central to synthesizing the constituent parts. (Is the term Frankenmolecules already taken?)

Researchers are working on just such approaches in an effort to understand the details of how living things get organized, and just recently another step was taken. Princeton chemist Michael Hecht and his team built proteins from scratch, put them in bacteria, and the bacteria used them to grow and carry on just as they do with the proteins they naturally generate. They demonstrated that there’s nothing mystical or magical about molecules generated in vivo. Actually, there were two artificial steps: they designed artificial DNA that then generated the synthetic proteins.

“What we have here are molecular machines that function quite well within a living organism even though they were designed from scratch and expressed from artificial genes,” said Michael Hecht, a professor of chemistry at Princeton, who led the research. “This tells us that the molecular parts kit for life need not be limited to parts — genes and proteins — that already exist in nature.”

“What I believe is most intriguing about our work is that the information encoded in these artificial genes is completely novel — it does not come from, nor is it significantly related to, information encoded by natural genes, and yet the end result is a living, functional microbe,” said Michael Fisher, a co-author of the paper who earned his Ph.D. at Princeton in 2010 and is now a postdoctoral fellow at the University of California-Berkeley. “It is perhaps analogous to taking a sentence, coming up with brand new words, testing if any of our new words can take the place of any of the original words in the sentence, and finding that in some cases, the sentence retains virtually the same meaning while incorporating brand new words.”

Although millions of proteins from evolved DNA already exist, the ones Nature has produced are only a small fraction of the proteins that could be produced by heretofore unseen DNA and protein combinations. The potential design space is vast. Some people think living things were produced by intelligent design from the beginning, but I think these experiments are getting us closer to the truth. Evolution of the world’s material into living things over a hell of a long time gave us what has gone before, but we’re getting closer and closer to true design of life forms from a huge set of possibilities that will become part of our world in the not-too-distant future.


books that changed my life

My wife and I are moving out of state next month, so we’re unloading stuff we don’t want to transport. I’ve had to look at my book collection and cull the ones I can live without. In the process I realized there’s a small set of books that have framed my way of looking at the world and kindled passions that will continue the rest of my life. They’re the books that have old yellow stickies sprouting from between the pages, yellow highlights and scribbles in the margins. I donated about 60 books to the local library, but these I’ll keep to the end.

Erich Jantsch, The Self-Organizing Universe: Scientific and human implications of the emerging paradigm of evolution, 1979.

Actually, I found this book after reading James Gleick’s Chaos. Chaos was an unusual best-seller in ~1987, I guess because we all experience “chaos” (in the colloquial sense) in our lives, and people evidently were looking for some insight. A lot of readers never finished the book because it explored the physics and mathematics of chaos, not necessarily the common term. Nevertheless, Chaos made the term “butterfly effect” part of our vernacular. It was a good introduction to chaos theory, but by the end of the book I was wondering: “With chaos being so pervasive in nature, how is it we see order and organization?” Jantsch’s book tackled that conundrum.
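The “butterfly effect” Gleick popularized is easy to demonstrate for yourself. Here’s a minimal Python sketch using the logistic map, a textbook example of a chaotic system (the particular starting values are just for illustration): two trajectories that begin one ten-billionth apart stay indistinguishable at first, then diverge completely.

```python
# Sensitive dependence on initial conditions ("butterfly effect"),
# shown with the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturbed by one ten-billionth

# Early on the two runs are still essentially identical...
print(abs(a[5] - b[5]))
# ...but by the later iterations the difference is as big as the
# whole range of the map, i.e. all memory of the start is lost.
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))
```

The tiny initial difference roughly doubles with every iteration, which is why no finite measurement precision lets you forecast a chaotic system very far ahead.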

Basically, Jantsch presented a framework for how the world organizes via hierarchical systems from the fundamental dynamics of the micro (atomic forces, molecules and basic physical properties) through simple living entities, complex organisms, ecosystems, and social systems. It is a set of concepts that are a theory of organization from basic dynamics up through the most complex things we know, living systems and our own societies. Here’s how Jantsch defines systems:

The notion of system itself is no longer tied to a specific spatial or spatio-temporal structure nor to a changing configuration of particular components, nor to sets of internal or external relations. Rather, a system now appears as a set of coherent, evolving interactive processes which temporarily manifest in globally stable structures that have nothing to do with the equilibrium and solidity of technological structures.

The mind-blowing idea that came through in this work is that there are processes that, when fed by external energy flows, can become so stable that we think of them as things. Especially in living systems, a lot of things are really just processes that persist as long as the right conditions exist and only that long. They’re called “process structures.” It looks like an oxymoron, but you can perceive some persistent processes as structures. When you get that, it tends to alter your notions of permanence and change. Some complex systems such as living organisms persist during what we call life, but when the sustaining conditions end the processes collapse and it’s all over.

Humberto Maturana and Francisco Varela, The Tree of Knowledge: The biological roots of human understanding, 1987.

The authors of this book set out to show that cognition is not simply our eyeballs and brain somehow internalizing what’s “out there” but is absolutely contingent on our biological structure and processes. Moreover, cognition is a result of our experience and interaction with other people through language. Their notions are pretty trippy. The book’s cover art is a Salvador Dali painting. But the key for me is that they build their argument for how “knowledge” works from the ground up, starting with processes of self-organization at the molecular level. From there they describe how living things come about through a process of “learning” clear up through humans with our shared knowledge and shared cognition.

Maturana and Varela’s key idea here is autopoiesis, self-organizing systems similar to Jantsch’s ideas.

Our proposition is that living beings are characterized in that, literally, they are continually self-producing. We indicate this process when we call the organization that defines them an autopoietic organization. […] The most striking feature of an autopoietic system is that it pulls itself up by its own bootstraps and becomes distinct from its environment through its own dynamics, in such a way that both things are inseparable.

Werner Loewenstein, The Touchstone of Life: Molecular information, cell communication, and the foundations of life, 1999.

Backing up all the way, Loewenstein goes about explaining the organization that enables the complexity of living things by starting with entropy and information theory. You can’t get more basic than the laws of thermodynamics!

Neither Jantsch’s nor Maturana and Varela’s books deal in detail with how information in chemistry figures into their notions of self-organization, but it’s there. Loewenstein makes the idea of information the theme of his book and carries it through from the idea of macromolecules clear up through cells, intracellular information exchanges, inter-cellular communication, and special information structures like neurons. But what I took away from this treatise is that the molecular structures at the cellular level are information devices as surely as the laptop I’m using to write this post. We’re so used to thinking of information in terms of human language and symbols that it seems strange to think that the conformations of proteins, DNA chains, “messenger” RNA and the intricate interactions among them are just as informational. But the robust and growing science of bioinformatics is based on just such ideas.

Dennis Bray, Wetware: A computer in every cell, 2009.

Actually, I’m just finishing this one. It’s a very interesting look at the internal informational workings of cells that give these basic units of living things a capability for awareness and appropriate responsiveness that deserves more attention and respect. Cells aren’t just bricks in the wall; they’re participants in some astute biology. Wetware brings together in the cell Loewenstein’s molecular informational processes and Maturana and Varela’s philosophical views of life processes as forms of cognition and learning.

What runs through all these books is the idea that the universe’s fundamental properties and rules allow the emergence of processes of great complexity; complexity sufficient to reach the level of life and at least one organism — us — with the capacity for self-awareness and splendidly subtle thought. That’s a truly amazing range of possibilities based on some very foundational laws. How this is possible is a chain of events that we can only partially explain at this point. The rest of the story requires details we’re only getting a glimpse of right now. It’s certainly a set of riddles that will keep me fascinated the rest of my days.


What’s with the perverse link ‘tween cancer and brain disease?

Two weeks ago I posted a somewhat facetious piece about an epidemiological study that found lower risk of cancer in people diagnosed with Alzheimer’s disease and, inversely, lower risk of Alzheimer’s in those with cancer. That’s not just converse but perverse as well.

Now The Feinstein Institute for Medical Research reports that Katherine Burdick, PhD, looked at the relation between the proto-oncogene (a gene that, when mutated, can contribute to cancer) MET and schizophrenia. There are family data that suggest that having a higher risk for schizophrenia lowers a person’s risk of cancer.

Serious mental illness or debility is a lousy trade-off against cancer. But these are not choices you make; they’re biological outcomes that have a rather extraordinary association. The big issue here is that genes and biological functions have effects that bridge what appear to be very different roles: brain function and cancer. There are clues.

“The results add to the growing evidence suggesting an intriguing relationship between cancer-related genes and schizophrenia susceptibility,” the scientists wrote.

It remains unclear exactly how the gene actually may increase the risk for schizophrenia while protecting against some forms of cancer. However, evidence from research on MET in autism provides some insight. Specifically, it is known that MET is activated (increased activity) when tumors develop and can increase the chance that cancer cells multiply and infiltrate other tissue.

The activation of MET during normal neurodevelopment is critical to ensure that neurons grow and migrate to position themselves correctly in the human cortex. In autism, it appears that while the brain is developing, reduced MET activity results in structural and functional changes in the brain that may increase a person’s risk for developing the disorder. The Feinstein investigators speculate that the same risk-inducing mechanism may be at play in its link to schizophrenia.


Turning the corner in nanotechnology

One of the things I like to write about is nanotechnology because — to put it directly — I think it’s going to be the technology that revolutionizes the 21st Century. To suggest it will be the next “industrial revolution” hardly covers it.

Back in 2000 when everybody was prognosticating about the next century I attended a conference put on by The Foresight Institute, an organization that has been pushing nanotechnology since the ’80s. They had a group of venture capitalists who were perhaps the first to invest anything in nanotech talking with an audience of geek enthusiasts and engineers from the Silicon Valley. The VCs were actually very reserved in their forecasts. Perhaps they were just trying to keep the audience from deluging them with proposals for the first billion-dollar nanotech start-up. They cautioned that VCs wanted things that were likely to start returning their investment in five or, at most, ten years. Investment capital is seldom very patient.

One of the really enormous ideas in the field is that nanotechnology will be able to make never-before-seen structures built with atoms placed precisely where they’re wanted. In other words, nano-manufacturing needs some sort of assembler that works in robotic fashion, diligently turning out one nano-widget after another. Imagine something like an auto assembly line where arms reach out to place parts and make welds through the endless repetition of robot programs — except on a scale of billionths of a meter. To make things that have significance in our macro-world, billions and trillions of nano-devices will be needed.

A recent post to h+ — an e-zine that loves far-out, futuristic stuff — describes recent developments in assemblers.

In a 2009 article in Nature Nanotechnology, Dr. [Nadrian] Seeman shared the results of experiments performed by his lab, along with collaborators at Nanjing University in China, in which scientists built a two-armed nanorobotic device with the ability to place specific atoms and molecules where scientists want them. The device was approximately 150 x 50 x 8 nanometers in size — over a million could fit in a single red blood cell. Using robust error-correction mechanisms, the device can place DNA molecules with 100% accuracy. Earlier trials had yielded only 60-80% accuracy.
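The “over a million could fit in a single red blood cell” claim checks out with back-of-envelope arithmetic. Here’s a quick sketch, treating the device as a simple box and assuming a typical red-cell volume of about 90 femtoliters (a standard textbook figure, not something from the article):

```python
# Device dimensions reported in the Nature Nanotechnology work: 150 x 50 x 8 nm.
device_nm3 = 150 * 50 * 8     # device volume: 60,000 cubic nanometers

# Assumed typical red blood cell volume: ~90 fL.
# 1 fL = 1 cubic micron = 10^9 cubic nanometers.
rbc_nm3 = 90 * 1e9

devices_per_cell = rbc_nm3 / device_nm3
print(f"{devices_per_cell:,.0f}")   # prints 1,500,000
```

So under these assumptions roughly 1.5 million of the devices would fit, comfortably supporting the “over a million” figure.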

What Dr. Seeman is using is DNA origami and structural features of DNA that are used in genetic recombination. Once again — as I described in an earlier post about nano-manufacturing — we are taking lessons from nature’s own original nano-assembler: DNA.


Denmark does telemedicine

A week ago I posted some rather snarky remarks about the resistance to full cooperation by physicians in a study of using telemedicine to augment treatment of people in ICUs.

In contrast, the NYTimes ran another article this week on a forthcoming Commonwealth Fund report about how telemedicine is being handled in Denmark. They tell about a 77-year-old man who has respiratory problems from smoking.

…he can go to the doctor without leaving home, using some simple medical devices and a notebook computer with a Web camera. He takes his own weekly medical readings, which are sent to his doctor via a Bluetooth connection and automatically logged into an electronic record.

“You see how easy it is for me?” Mr. Danstrup said, sitting at his desk while video chatting with his nurse at Frederiksberg University Hospital, a mile away. “Instead of wasting the day at the hospital?”

He clipped an electronic pulse reader to his finger. It logged his reading and sent it to his doctor. Mr. Danstrup can also look up his personal health record online. His prescriptions are paperless — his doctor enters them electronically, and any pharmacy in the country can pull them up. Any time he wants to get in touch with his primary care doctor, he sends an e-mail message. […]

Several studies, including one to be published later this month by the Commonwealth Fund, conclude that the Danish information system is the most efficient in the world, saving doctors an average of 50 minutes a day in administrative work. And a 2008 report from the Healthcare Information and Management Systems Society estimated that electronic record keeping saved Denmark’s health system as much as $120 million a year.

The most interesting thing about this, however, is that what is making this possible in Denmark and not in the US is a difference in attitude. It’s not about technology.

“It was a natural progression for us,” said Otto Larsen, director of the agency that regulates the system. “We believe in taking care of our people, and we had believed this was the right way to go.” […]

Kurt Nielsen, the [Thy-Mors] hospital’s director, says that while the doctors are not particularly adept at information technology, they have gradually embraced it. And it helps that the staff was involved in developing the innovations.

“My staff at the hospital is very, very satisfied,” he said. “We build these systems in an incremental way, and seek their input throughout.”

Everyone would acknowledge that implementing EHR, telemedicine and other technologies in a population of 6 million is a much smaller financial and procedural challenge. But when your society (i.e., the US and its health industry) seems to lack “we take care of our people” as a top value, the barriers to putting such far-reaching technological systems in place are nearly insurmountable.


Are they trying to drive us crazy?!

An item on Google News this morning is hanging in there presumably because it’s getting a robust number of hits. It’s a science item reported by ABC News and others that suggests that mice engineered to get Alzheimer’s disease showed some slowing of cognitive loss and even improvement of function when exposed to cell phone-level EMR.

In mice prone to an animal form of Alzheimer’s disease, long-term exposure to electromagnetic radiation typical of cell phones slowed and reversed the course of the illness, according to Gary Arendash of the University of South Florida in Tampa and colleagues.

This piece of news is being met with a lot of skepticism. The researchers acknowledge that the findings need to be verified. Still, one credible critic said: “This is nonsense.” Perhaps the best advice comes from Dr. Roger Brumback of Creighton University who said: “extreme caution is necessary until this outcome has been confirmed independently in other laboratories.”

In other words, don’t strap a cell phone to your head to stave off Alzheimer’s. After all, the debate continues about whether or not cell phone radiation increases the risk of brain tumors. Some are issuing warnings of the risk of brain cancer from cell phones while others maintain the evidence isn’t there.

Now add to this another finding reported a few weeks ago that suggests that people who have cancer have a reduced chance of developing Alzheimer’s while those who have Alzheimer’s have a reduced risk of cancer!

What?! Talk about a good-news, bad-news situation. Gimme a break! The two most feared diseases that afflict us later in life may trade off in some diabolical sort of way! Well, this too is a statistical association without a clue about possible causal mechanisms. We’re just left to wrestle with a Catch-22 piece of science.

So what’s my point? It’s that more than ever we are being exposed to scraps of scientific information that didn’t reach our attention in decades past. The information ends up on the internet and TV because it’s sensational or maybe because it’s just plain ironic. And we are called upon increasingly to judge what is thrown at us. We can’t always call in an “expert.”

I’ve said several times that cancer and other life-science issues fit well with the title of the Meryl Streep movie now in theaters: “It’s Complicated.” Science is a human endeavor. When we get a righteous fact out of research, flashing lights and ringing bells don’t go off like on some game show. Trust in what we mortals observe has to be built up over time with confirming observations and applications in the world that work reliably. It’s inconvenient, but that’s the way it is. We have to step up our game for critically evaluating what often seem like spectacular claims, and we should always adopt something of a wait-and-see attitude.


Criticism of genetics research continues

I’ve posted a couple of times about how the big genetics research explosion during the ’00s has produced a backlash of criticism that the practical results of the big bucks spent on the enterprise have not matched the investment. That theme continues with an article in the Times Online yesterday. Another group of British and American researchers are basically saying the juice isn’t worth the squeezing.

In biomedical scientific research there’s always a basic tension: 1) spend the money on basic biological research to clear up the many remaining mysteries of living things, or 2) spend it on the most direct applied research to get to specific medical benefits sooner. Basic research is often the long way around in science. It takes longer and doesn’t get to treatments as quickly, but when you get there you’ve got a better picture of biological processes which may have unexpected benefits in other medical areas. The quick results route — as directly as possible from A to B — may get there or it may get waylaid because some basic biology piece is missing. Most funders of medical research hedge their bets by spreading research dollars around in both strategies.

It troubles me when I see one group of scientists devaluing the work of another. When I hear “it’s not worth the money” I often interpret that to mean: “Please, please fund MY research!” or “I told you so!”

Science can be a pretty rough-and-tumble business. Because scientists generally have big vocabularies the verbal blows aren’t as crude as you’d hear on, say, the old Jerry Springer show. But the insults are there.

Also, to call for pulling the plug on genetics research at this point tends to violate a basic principle of science: if you don’t get the results you were expecting with some approach, an even more important question comes up: Why? Ever since the Human Genome Project reached its preliminary results in 2000 a bunch of thorny questions have come up. Why are there so few genes (~30,000-40,000) in the human genome, and how do we get our magnificent selves from so few? What’s with all this “junk” DNA anyway? Why aren’t there a few glaringly prominent genetic clinkers in common diseases like cancer instead of a bunch of genetic flags whose roles are really hard to sort out?

I don’t have the answers, of course, but there are answers to these fundamental questions about our genes, and they’re important. We’re not going to master the frailties that befall us without answers.

As I’ve said before, it’s too bad that nature isn’t simple. If it were, it would be a lot easier to solve problems that vex us. But, whether it’s biology or physics, deep complexity is a basic feature of reality, and we have little choice but to doggedly track it down. And, isn’t that something that keeps life interesting anyway?


Overcoming big hurdles in cancer research and treatment

I’ve posted a couple of times [here and here] about the complexity of living systems and the challenges that reality presents for medicine. That’s certainly true for cancer.

But the march toward greater information goes on. A Princeton U press release titled “Scientists find way to catalog all that goes wrong in a cancer cell” describes an advance in algorithms that makes defining the pathways of complex genetic interactions possible. Researchers “were able to systematically categorize and pinpoint the alterations in cancer pathways and to reveal the underlying regulatory code in DNA.”

Researchers Saeed Tavazoie, Hani Goodarzi, and Olivier Elemento say:

“We are discovering that there are many components inside the cell that can get mutated and give rise to cancer…Future cancer therapies have to take into account these specific pathways that have been mutated in individual cancers and treat patients specifically for that.”

The researchers developed an algorithm, a problem-solving computer program that sorts through the behavior of each of 20,000 genes operating in a tumor cell. When genes are turned “on,” they activate or “express” proteins that serve as signals, creating different pathways of action. Cancer cells often act in aberrant ways, and the algorithm can detect these subtle changes and track all of them. […] The algorithm devised by the group scans the DNA sequence of a given cell — its genome — and deciphers which sequences are controlling what pathways and whether any are acting differently from the norm. By deciphering the patterns, the scientists can conjure up the genetic regulatory code that is underlying a particular cancer.
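To make the basic move concrete, here’s a toy Python sketch of that kind of detection. This is emphatically not the Princeton group’s algorithm (which works on regulatory DNA sequences and whole pathways); it only illustrates the underlying idea of flagging genes whose expression in a tumor sample departs from a baseline of normal samples. The gene names and numbers are invented for the example.

```python
# Toy sketch: flag genes acting "differently from the norm" in a tumor
# sample by z-scoring each gene's tumor expression against a baseline
# built from normal-tissue samples. Illustrative only.
from statistics import mean, stdev

def flag_aberrant_genes(normal, tumor, z_cutoff=3.0):
    """normal: {gene: [expression values in normal samples]}
    tumor: {gene: expression value in the tumor sample}
    Returns {gene: z_score} for genes beyond the cutoff."""
    flagged = {}
    for gene, values in normal.items():
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue  # no variation in baseline; can't z-score
        z = (tumor[gene] - mu) / sigma
        if abs(z) >= z_cutoff:
            flagged[gene] = round(z, 1)
    return flagged

# Invented example: TP53 sharply down in the tumor, a housekeeping
# gene (GAPDH) essentially unchanged.
normal = {"TP53": [10.0, 10.5, 9.8, 10.2], "GAPDH": [50.0, 49.5, 50.5, 50.2]}
tumor = {"TP53": 2.0, "GAPDH": 50.1}
print(flag_aberrant_genes(normal, tumor))  # only TP53 is flagged
```

The real system does this over ~20,000 genes at once and then maps the aberrant ones onto regulatory sequences and pathways, which is where the hard part lives.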

The goal: developing much more specific therapies targeted to these variations.

That’s cool. It’s great to keep digging for the information to understand and treat cancers. But I have to add — having spent many years in the sometimes idealistic realm of cancer control — there’s another giant issue that has reared its ugly head in recent years: cost. When I started my career three decades ago nobody was talking about the economic barriers of cancer therapy development and widespread adoption being such a problem. But the ongoing farcical struggle over reasonably equitable health care access for all Americans (much less the rest of the world) demonstrates that how much cancer treatment costs and who pays for it is a formidable problem in “curing cancer” in its own right. Teasing out the information this research suggests is needed and then delivering treatments from it in practical terms to a big population is going to be a long process.


Life is complex. Sorry.

Here’s part two of why there’s a lot of disappointment that the investment by taxpayers and investors in genetics research hasn’t resulted in a lot of great treatments for diseases. What has been discovered is that even the most “simple” living things are more complex than was thought a few years ago. Studies published in last week’s Science take the detail about the systems in a simple bacterium to a new level. The bottom line: no living thing is simple.

Since you’ve gotta have a subscription to Science to read the articles (and I don’t) we’ll go on the abstracts and what was reported in Wired about them. Teams of researchers studied Mycoplasma pneumoniae, a bacterium that has only one-fifth the genes of the usual study subject, E. coli. In Brandon Keim’s Wired article he says:

The analysis combined information about gene regulation, protein production and cell structure… It’s far closer to a “blueprint” than a mere genome readout, and reveals processes “that are much more subtle and intricate than were previously considered possible in bacteria,” wrote University of Arizona biologists Howard Ochman and Rahul Raghavan in a commentary…In short, there was a lot going on in lowly, supposedly simple M. pneumoniae, and much of it is beyond the grasp of what’s now known about cell function…“Linear mapping of genes to function rarely considers how a cell actually accomplishes the processes,” wrote Ochman and Raghavan. “There is no such thing as a ’simple’ bacterium.”

The scientists suggest that the techniques can be applied to more complex organisms with beneficial results. Some skeptics might be thinking, “Yeah, we’ve heard that before.”

It would be great if real life biology were simpler. We might have relief by now from things we dread like cancer. But the truth is that living things are incredibly complex and so are the chronic diseases that afflict us. Life science research and research into cancer, in particular, has been like peeling an onion or opening the so-called Russian dolls: inside each layer we’ve found another layer of complexity. It’s daunting, but there’s little we can do but roll up our sleeves and get back to figuring out the next puzzle.

People are going to have to take it upon themselves to do the preventive and health-sustaining things that are well known like healthy diet and exercise to keep their best health for a lifetime. Eventually the depth of biological mysteries will be puzzled out, but it’s going to continue to be a long trip, and it may never be possible to count on external treatments entirely to give us back good health if we’ve let it slide too long.


