Archive for the 'Digital Times' Category


one more chance for humans?

I’ve been following with interest the “Jeopardy!” programs featuring Watson, the IBM supercomputer designed to play the game. On the first program Monday Watson took an early lead over the human “Jeopardy!” champs, Ken Jennings and Brad Rutter, but it seemed stumped when the high-dollar questions were reached, and the game ended in a tie between Watson and Rutter. In the second game Watson ran away with the lead.

So what caught my attention this morning was the headline of the article in International Business Times: “Round Two Goes to Watson; Humans Have One More Chance.” So how did the article’s author, Gabriel Pena, mean “one more chance for humans”? One more chance to win “Jeopardy!”? Or does it mean something more existentially ominous: one more chance for humans before we’re replaced by machines smarter than we are?

I don’t think it’s time to panic. And I’m a real skeptic of ideas like the “singularity”: a time in the not-too-distant future at which computers become so intelligent and superior at controlling systems in our world that we’re irrelevant.

But my wife’s reaction to the prospect of Watson beating some really smart guys is telling. She asked, “So who’s going to lose their job?” Ah, yes! Haven’t we learned a few things during the so-called “Great Recession”? One of them is that people laid off are not being rehired; companies are investing in “productivity” tools rather than jobs. “Productivity” is a code word for doing the same work with fewer jobs. If you are not the person running the new productivity devices then your job’s in jeopardy for sure. Implementing higher productivity has been a basic economic process for a long time, but Watson’s technology is enough to make the hair stand up on the back of your neck.

Technological productivity advances are like riding on the back of a tiger: if you stay on its back you’re okay, but if you fall off you’re chow. Watson is a productivity system that appears to have made a real stride in being able to take an English-language question and parse it into a specific information request better than earlier question-answering technologies. IBM calls it “open question answering.” The company is going to turn this into a commercial product and apply it initially in medicine and medical law. It will help researchers or doctors plow through vast stores of unstructured information like journals to come up with answers to their questions more efficiently than anything before. As an IBM exec, David McQueeney, said to the Washington Post this morning:

“Imagine taking Watson, and instead of feeding it song lyrics and Shakespeare, imagine feeding it medical papers and theses,” he said. “And put that machine next to an expert human clinician.”

With the computer intelligently processing a vast amount of data and the doctor using his or her own professional knowledge to guide and refine a search, McQueeney said, IBM thinks the quality of that diagnosis could be better than what the doctor comes up with alone.

Looks like doctors will be the first up on the back of this tiger.

I have been a skeptic about artificial intelligence for a long time. I’ve been hearing that AI is right around the corner since the ’50s. Lots of claims have been made but, like the rocket belt, the flying car, nuclear fusion, undersea cities, and the cure for cancer, the expected results haven’t been delivered. Watson is by no means an equivalent to human intelligence, but it appears to be an indicator that progress is being made. We’re not about to be made totally obsolete any time soon — if ever — but the “one more chance” for many of us is to stay abreast of this emerging technology and use it as our tool rather than have it put us behind the 8-ball.


A lesson in globalism with a local twist

When I moved up here near Portland several months ago I’d heard that it had what they called the “Silicon Forest.” To the west of Portland is a nest of high-tech companies including Nike, HP, Intel, Sun and quite an impressive list of others. And in the last few weeks there has been a lot of excitement over the announcement by Intel that it was going to expand its Hillsboro manufacturing and testing plant. Hillsboro is about five miles west of where I live.

In this down economy the announcement of an $8 billion plant to support its cutting-edge 22 nm processor technology is fantastic news. So the latest, greatest Intel CPUs will be made here for several years. It’ll mean somewhere between 6,000 and 8,000 construction jobs over the next two years and about 1,000 permanent technical jobs. Sweet!

A lot of discontent in the US these days is over how “our jobs have been shipped overseas!” The local Intel plant runs counter to that trend, so I was somewhat surprised to see an announcement in my Oregonian this morning that Intel’s CEO, Paul Otellini, was in Ho Chi Minh City, Vietnam, yesterday for a ceremony officially opening a $1 billion plant there. And earlier in the week he cut the ribbon on an Intel manufacturing plant in Dalian, China. Gee, do you remember when we were at war with North Vietnam and Ho Chi Minh City was still called Saigon? I sure do.

But it’s illustrative of how global commerce works. A lot of Americans seem to think that companies with US headquarters and household names somehow have an obligation to stay here and employ Americans. Nah, that’s not how it works. These are not American corporations; they’re global, and they go anywhere in the world to get the lowest labor costs and the best deals on the rest of what it takes to make their products. If you want to make money in the US, invest in Intel stock. They don’t see the world and their purpose in nationalistic terms. A lot of Americans need to adjust their thinking to fit that reality.


another volley in the healthcare revolution

The Washington Post is reporting today that a company called Pathway Genomics is going to start selling an over-the-counter kit for testing certain genetic traits through Walgreens’ 6,000 drugstores.

Beginning Friday, shoppers in search of toothpaste, deodorant and laxatives at more than 6,000 drugstores across the nation will be able to pick up something new: a test to scan their genes for a propensity for Alzheimer’s disease, breast cancer, diabetes and other ailments.

The test also claims to offer a window into the chances of becoming obese, developing psoriasis and going blind. For those thinking of starting a family, it could alert them to their risk of having a baby with cystic fibrosis, Tay-Sachs and other genetic disorders. The test also promises users insights into how caffeine, cholesterol-lowering drugs and blood thinners might affect them.

Yeow, that’s going to set off a firestorm! A couple of years ago, when companies like 23andMe began to offer tests to consumers, the California and New York public health departments and the FDA tried to shut them down. They issued “cease and desist” orders and threatened to charge them with various violations of business-practice laws. In fact the kerfuffle has already started.

The Food and Drug Administration questioned Monday whether the test will be sold legally because it does not have the agency’s approval. Critics have said that results will be too vague to provide much useful guidance because so little is known about how to interpret genetic markers.

The medical profession is conservative with good reason: lives are at stake. But in all this, in my opinion, there is also a component of protecting professional prerogatives. Professions in any field don’t give ground to the ordinary person easily.

I’ve had some experience with this. When I started in the cancer field 36 years ago we had two sets of printed literature: one set for the lay public and another for doctors and nurses. You were risking getting fired if you let a cancer patient get hold of the professional literature! The reasons then were the same ones physicians express now about internet information: “they (the public) won’t understand what it means; they will misinterpret it; they’ll suffer anxiety; they might make bad decisions about treatment.” But the internet irreversibly smashed the barrier to access to professional medical information. Doctors are still fighting a rear-guard action and complaining mightily about how it was better in the old days when they were the exclusive source of medical information. I’ve commented on that before.

I’m not dismissing the concerns. No doubt there will be unfortunate incidents around these new tests. But what gets me is how unwilling the medical profession is to see the revolution of information that is underway and to rethink the medical paradigm. My plea is for physicians to start — as a profession — to work on a more equitable and flexible basis with the citizens who want a greater and more equal role in their medical life. We’ll always have a doctor/patient relationship, but I think it’s going to be much different in the not-too-distant future.

The internet isn’t going away; instead it’s going to go much, much deeper into our health lives. And genetic tests are not going away either. Like it or not, deep personal knowledge about what lurks in our genes is on the way. Why isn’t the medical profession working with entrepreneurs, patients, futurists, and internet gurus to anticipate what’s coming and do something positive that works for everyone? There’s much, much work to be done, and soon. Without a collaborative movement of innovation and adaptation we’re going to suffer through repeated, time-wasting bouts of friction.


the paradigm for the genetics of complex diseases is changing


One of the themes of this blog is that living things are complex and that making clinical gains from areas of research such as genetics is just plain hard. There’s been a lot of questioning of genetic research lately, but, as I’ve tried to point out, there are many factors other than plain ol’ DNA involved in the way genes manifest in disease. That picture got strong reinforcement this past week when two highly respected genetics researchers at the University of Washington, Mary-Claire King and Jon McClellan, published an essay in Cell titled “Genetic Heterogeneity in Human Disease.”

For decades the basic genetics paradigm held that common diseases are caused by common variants (CDCV). That is, to look for genetic causes of cancers the reasonable thing would be to identify genetic variations (mutations) found most often in cancer cases. That makes sense, but it turns out that finding these common genetic variations is not enough to explain all of the disease. King and McClellan say:

…from the perspective of genetics, we suggest that complex human disease is in fact a large collection of individually rare, even private, conditions…In molecular terms, we suggest that human disease is characterized by marked genetic heterogeneity, far greater than previously appreciated. Converging evidence for a wide range of common diseases indicates that heterogeneity is important at multiple levels of causation: (1) individually rare mutations collectively play a substantial role in causing complex illnesses; (2) the same gene may harbor many (hundreds or even thousands) different rare severe mutations in unrelated affected individuals; (3) the same mutation may lead to different clinical manifestations (phenotypes) in different individuals; and (4) mutations in different genes in the same or related pathways may lead to the same disorder.

There’s a huge idea here: Complex human diseases involve sets of complex genetic variations, so many, in fact, that each person’s case of a disease may have individual characteristics. We accept the idea that each individual is unique, but it’s perhaps surprising to think that your case of cancer, for instance, may bear individual characteristics.

The overall magnitude of human genetic variation, the high rate of de novo mutation, the range of mutational mechanisms that disrupt gene function, and the complexity of biological processes underlying pathophysiology all predict a substantial role for rare severe mutations in complex human disease. Furthermore, these factors explain why efforts to identify meaningful common risk variants are vexed by irreproducible and biologically ambiguous results.

Next-generation sequencing provides its own challenges. Whole-genome sequencing strategies detect hundreds of thousands of rare variants per individual (McKernan et al., 2009). Biological relevance must be established before a mutation can be causally linked to a disorder. The critical question is not whether cases as a group have more rare events than controls; but rather which mutation(s) disrupting a gene is responsible for the illness in the affected person harboring the variant. Variable penetrance, epistasis, epigenetic changes, and gene-environment interactions will complicate these efforts. It will be fun to sort out. [Emphasis mine.]

So, as I’ve remarked before, life is complicated. Living systems are the most complex things we know of in the universe, and we’re only now beginning to explore them in detail. We want results to save us now! But it’s going to be some time before we fully understand diseases like cancer, and then a long time till effective therapies are widely available. Moreover, we have no idea what it’s all going to cost, and, as our recent rancorous debate on health care demonstrates, cost is no trivial matter.

Health paradigm for the 21st C, part 2

Okay, Part 1 of this post was precipitated by the Society for Participatory Medicine’s request for ideas about what members would like to see it do. I talked about my take on the whys and wherefores of participatory medicine. This post is a list of eight activities I’d like to see supported by the Society for Participatory Medicine:

1. Develop an actionable plan for the goal of enabling each individual to become his or her own primary care authority for 90%-95% of health incidents.

Primary care docs want to go specialist because it pays more, so why not elevate the individual to primary care provider and shift the physician to the role of specialist, involved as needed? A few months ago during the health care debate on The Health Care Blog I saw a remark (by a physician, as I recall) that about 80% of health events are handled by the individual: cuts and minor trauma, headaches, colds and flu, aches and pains, nutrition, supplements, upset GI, menstrual issues, and on and on. The drugstore is often the supply center for this first line of treatment. What if, with the right tools and support, that percentage could be elevated to 90% or 95%?

2. Develop a plan for building well-developed, well-funded information support systems specifically to support lifelong personalized health learning and decision making.

The internet is little more than a platform for information storage and cheap distribution, with content kluged together from unrelated sources. However, people have already adopted the internet as a primary source for health information (Pew Internet Surveys). But so far there is no well-funded health resource base specifically designed to achieve anything like the goal above. The internet is a hodgepodge of sites and information of variable quality. WebMD and other commercial sites provide general content as part of their marketing platform. Wikipedia is one crowd-sourced way to compile information, but its quality has been challenged and the whole enterprise criticized. Medpedia, with content from academics at reputable institutions, arose pretty much to be an authoritative alternative to the noise of internet health information, but it’s primarily a reference work and does not seem to have figured out the public involvement part. There are thousands of nonprofit and government sites with bits and pieces of information, but there is no sign of a national commitment to an architecture designed to empower the public with knowledge in a person-specific or engaging way. The only site I am aware of that seems within striking distance of the comprehensiveness necessary is the National Library of Medicine. Its MedlinePlus and PubMed resources might be a precursor to a more innovative way of supporting personal medicine.

The information from a well-designed and well-networked system should contain a mechanism that helps everyone understand what medical information is “evidence-based” and what the certainty level of current evidence is. The substantiation of information should be dynamic and constantly updated. The system should also help people learn that the scientific process works toward greater certainty over time and that grey areas with less than 100% proof are a necessary part of understanding medicine.

3. The integrated health knowledge network suggested in item 2 should take a systems approach to human biology and medicine.

In the 20th century the human organism was disassembled for study by segmentation and reductionism. Specialized areas of medicine, nonprofit organizations, and governmental expert agencies took off in their own directions too. The result is a very fragmented picture of health that still dominates today. Knowledge supporting personal health engagement should put the puzzle of health together. The knowledge base of health and life education should follow guidelines that support clarification of how various sub-systems of the human organism play a part in the function or malfunction of the whole.

4. The approach to participatory medicine should be founded on the principle that learning about health is a lifelong matter.

Information should be communicated and made available on an as-needed or just-in-time basis throughout life but within a cohesive systems framework. As I pointed out in an earlier post, parents are beginning to accumulate and electronically record information about children at birth. With the cost of full genome sequencing plummeting it is likely that the process will eventually become routine at birth. It does not seem out of the question that health knowledge can start at birth with a full family genome and health history as a basis for baseline health assessment and risk estimation.

From the outset, children are curious about their bodies, and many teachable moments are possible if appropriate information is provided in a personalized, situation-specific way. A whole range of age-appropriate information should utilize current and future technology to find innovative ways of interfacing health information with the many learning opportunities throughout life. Games, avatars, social networks, and virtual environments could be employed to engage various groups. People cannot and need not become experts in all aspects of medicine, but over time they can become experts about themselves and the health matters that are issues for them as indicated by genomic data, family history, and racial and cultural variables. Needless to say, a health support information system will need to have as its mission staying abreast of and innovating with emerging technology.

5. Facilitate the evolution of an open system of quantifying sensors and devices that measure many aspects of bodily function, health status, fitness, and consumption, and that can be seamlessly integrated with the knowledge network and EHRs and informed by personalized health models.

The problem with life is that we are born without a “dashboard” for our bodies and with no operating manual. When health problems arise, symptoms such as pain, swelling, and other sensations often come too late to prevent acute illness. And our bodies provide few perceptible clues about the precursors of chronic conditions.

Health 2.0 activity has shown that there are many entrepreneurs eager to supply devices and services related to a personal approach to health. But technology standards committees need to be established or coordinated so that devices and data supporting participation can avoid what has happened in the electronic medical record industry. Interoperability and integration are essential, and the participatory movement will be inhibited if these characteristics are not incorporated from the outset. Open data standards, open applications, and open media standards are necessary to put together the systems of communication, data recording and transmission, security, and social networking that are sub-systems of the greater vision.

The price of admission for entrepreneurs in participatory medicine should be open standards all around. Consumers should be advised not to support products that cannot be integrated with other components of the greater system (motto: “Homey don’t play dat”). An encouraging development in this regard is the Open Mobile Health Exchange. Nevertheless, ongoing advocacy is needed to keep standards open.
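To make the interoperability point concrete, here is a minimal sketch of what an open, self-describing device reading might look like. The field names and vendor names are invented for illustration, not any real standard; a body like the Open Mobile Health Exchange would define the actual schema. The point is simply that any device emitting records in a shared, open format can be consumed by any application:

```python
import json

# Required fields every reading must carry, whoever made the device.
REQUIRED_FIELDS = {"timestamp", "type", "value", "unit", "device"}

def parse_reading(raw_json):
    """Parse one device reading; reject records missing required fields."""
    record = json.loads(raw_json)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"record missing fields: {sorted(missing)}")
    return record

# Two different (hypothetical) vendors, one shared format:
scale = ('{"timestamp": "2010-05-14T07:02:00Z", "type": "body_weight", '
         '"value": 82.5, "unit": "kg", "device": "acme-scale-2"}')
cuff = ('{"timestamp": "2010-05-14T07:05:00Z", '
        '"type": "blood_pressure_systolic", '
        '"value": 124, "unit": "mmHg", "device": "zeta-cuff-1"}')

readings = [parse_reading(scale), parse_reading(cuff)]
```

A proprietary format would require one parser per vendor; a validated open record like this lets the knowledge network, an EHR, or a social app all read the same stream.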

6. Drive a counter-culture movement that encourages the US population to reset its expectations of the market economy from tolerance of the current state of health irresponsibility to one of health benefit.

The market system in the US is health-indifferent; it is not held accountable for consumer products designed to exploit basic cravings regardless of long-term personal or societal health burdens. In fact health corruption and health correction are complementary streams of income. Billions of dollars are spent on the design and marketing of products that contribute to illness, only to be answered by products and services marketed to compensate and bring us back toward health. It’s an amazing wealth engine where the right and left hands wash each other.

The weird thing about health “responsibility” in US society is that, with regard to food and drink, only consumers, not producers of goods, are considered responsible. If we over-consume a product designed and marketed to maximize our consumption, the producer is not held accountable. That’s the way it used to be with tobacco, but we changed the perception of responsibility about tobacco between the 1970s and the end of the last century.

A similar cultural change is needed about food and drink. We have a start; producers of sugary cereals and high-fructose corn syrup drinks have been criticized for marketing them to children. Similar accountability — or at least social scorn — is necessary for other consumables. Producers have gotten away with saying, “Hey, we don’t force you to drink all that corn syrup. It’s your fault, not ours.” Perhaps as the extreme cost in dollars to US society from obesity and its consequences generates even more pain we’ll be less willing to swallow the denial of culpability that the marketplace hides behind.

7. Advocate for the funding and development of human biological system models that can be personalized so that a constant stream of information may be analyzed and used as a source of near-real-time feedback about our health status and behavior.

We need sophisticated human systems biology and computer health models based on the best scientific information. They should be designed so that health data from our genomes, family history, lifetime health history, and daily activity can be combined to form a personalized profile or algorithm. Our own model — embodied perhaps as an avatar — could be constantly available to interpret data and give us feedback or status reports. Such personalized models could also set the appropriately personal context for health information and learning.
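As a toy illustration of the feedback idea — the data, function, and threshold here are invented for this sketch and are nothing like a real clinical model — the essential move is that *your own history*, not a population norm, sets the baseline against which new readings are judged:

```python
from statistics import mean, stdev

def personal_alert(history, new_value, z_threshold=2.0):
    """Flag a new reading that deviates sharply from this person's baseline.

    history: list of this individual's past readings
    new_value: today's reading
    """
    baseline = mean(history)
    spread = stdev(history)
    z = (new_value - baseline) / spread  # standard deviations from baseline
    if abs(z) >= z_threshold:
        return f"alert: reading {new_value} is {z:+.1f} SD from your baseline"
    return "within your normal range"

# Hypothetical resting heart rate over the last two weeks:
resting_hr = [62, 64, 61, 63, 62, 65, 63, 62, 64, 63, 61, 62, 64, 63]
status = personal_alert(resting_hr, 78)
```

A resting heart rate of 78 that would be unremarkable in the population at large gets flagged here because it is far outside this individual’s own pattern — the kind of personalized interpretation a richer model could provide continuously.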

8. Work to support augmented reality development for an environment that will enable us to get information on-the-fly about what our options are for the things we eat and drink.

Institutional support is needed to create an augmented-reality environment of information for restaurants and markets via databases that support easy access to information about what we’re consuming. Bar codes, wi-fi, Bluetooth, RFID tags, and future technology should allow smartphones to immediately obtain information about the nutritional content of meals in restaurants and packaged products in markets. I already use an app called “FoodScanner” that uses the iPhone camera to scan package barcodes, look them up in a remote database, and provide me with the nutrition information food products are required to carry on the package. The information can be saved for future use, but the whole process is pretty klutzy. A system that automatically grabs information and checks it against a personal profile of stuff to avoid is not hard to imagine.
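A rough sketch of what that check might look like — the barcode database, product data, and profile format below are all invented for illustration; a real app (like the FoodScanner mentioned above) would query a remote service rather than a local table:

```python
# Stand-in for a remote barcode database: UPC -> nutrition facts.
# Products and numbers are made up for this example.
BARCODE_DB = {
    "021000615261": {"name": "Mac & Cheese", "sodium_mg": 560,
                     "ingredients": ["wheat", "milk",
                                     "high fructose corn syrup"]},
    "011110038364": {"name": "Apple Juice", "sodium_mg": 10,
                     "ingredients": ["apple juice concentrate"]},
}

def check_product(upc, profile):
    """Look up a scanned barcode and return warnings against a personal profile.

    profile: {"avoid_ingredients": [...], "max_sodium_mg": int}
    """
    product = BARCODE_DB.get(upc)
    if product is None:
        return None  # unknown barcode -- fall back to manual entry
    warnings = []
    for bad in profile["avoid_ingredients"]:
        if any(bad in ing for ing in product["ingredients"]):
            warnings.append(f"contains {bad}")
    if product["sodium_mg"] > profile["max_sodium_mg"]:
        warnings.append(f"sodium {product['sodium_mg']} mg exceeds limit")
    return {"name": product["name"], "warnings": warnings}

my_profile = {"avoid_ingredients": ["high fructose corn syrup"],
              "max_sodium_mg": 400}
result = check_product("021000615261", my_profile)
```

The scan-lookup-compare loop is trivial once the data exists in open, queryable form; the hard part, as argued above, is the institutional support for the databases themselves.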

When I was in school at about 13 we had “hygiene” class, in which we had to learn the parts of the body (“pipes and plumbing,” as it was known) and their functions. Then in high school we boys got movies and slide shows with “the coach” to graphically show how disgusting VD and pregnancy are. That was supposed to deter us from sex until marriage. It was also all I got from public education about health. I suppose it was somehow supposed to enable me to maintain my health for life.

The steps I outlined above are, I hope, a more robust approach, consistent with the technology and lifestyles of the near future. The iGeneration evidently no longer sees a reason to fill their heads with generalized information with less than obvious personal applicability. They already know they have the option of getting appropriate information at the time it’s needed. Perhaps they’re already aware that the information they’ll be exposed to during their lives will be changing constantly. Making this situation lend itself to a healthier population is going to require many elements working together.

The things I’ve suggested also are simply ideas for a long-term process. If there’s one thing I’ve learned from a career in public health it is that change tends to be a lengthy, nonlinear process requiring tolerance for uncertainty and unexpected developments. Change is a career, not a project.


Turning the corner in nanotechnology

One of the things I like to write about is nanotechnology because — to put it directly — I think it’s going to be the technology that revolutionizes the 21st Century. To suggest it will be the next “industrial revolution” hardly covers it.

Back in 2000, when everybody was prognosticating about the next century, I attended a conference put on by The Foresight Institute, an organization that has been pushing nanotechnology since the ’80s. They had a group of venture capitalists who were perhaps the first to invest anything in nanotech talking with an audience of geek enthusiasts and engineers from Silicon Valley. The VCs were actually very reserved in their forecasts. Perhaps they were just trying to keep the audience from deluging them with proposals for the first billion-dollar nanotech start-up. They cautioned that VCs wanted things that were likely to start returning their investment in five or, at most, ten years. Investment capital is seldom very patient.

One of the really enormous ideas in the field is that nanotechnology will be able to make never-before-seen structures built with atoms placed precisely where they’re wanted. In other words, nano-manufacturing needs some sort of assembler that works in robotic fashion, diligently turning out one nano-widget after another. Imagine something like an auto assembly line where arms reach out to place parts and make welds through the endless repetition of robot programs — except on a scale of billionths of a meter. To make things that have significance in our macro-world, billions and trillions of nano-devices will be needed.

In a recent issue of h+ — an e-zine that loves far-out, futuristic stuff — there’s a post about recent developments in assemblers.

In a 2009 article in Nature Nanotechnology, Dr. [Nadrian] Seeman shared the results of experiments performed by his lab, along with collaborators at Nanjing University in China, in which scientists built a two-armed nanorobotic device with the ability to place specific atoms and molecules where scientists want them. The device was approximately 150 x 50 x 8 nanometers in size — over a million could fit in a single red blood cell. Using robust error-correction mechanisms, the device can place DNA molecules with 100% accuracy. Earlier trials had yielded only 60-80% accuracy.

What Dr. Seeman is using is DNA origami and structural features of DNA that are used in genetic recombination. Once again — as I described in an earlier post about nano-manufacturing — we are taking lessons from nature’s own original nano-assembler: DNA.


Peering into the generation chasm

In the NYTimes Week in Review this past weekend Brad Stone wrote an article titled “The Children of Cyberspace: Old Fogies by Their 20s.” He observed that his 2-year-old is already learning touch-screen technologies and the Kindle instead of books. His speculation is that “this generation” (i.e., his young daughter) is going to be quite different from kids now in their teens. Mini-gaps are rapidly developing in the way technology affects experience, equivalent to the way we used to think about 20-year generation gaps. He quotes Lee Rainie, director of the Pew Research Center’s Internet and American Life Project:

“People two, three or four years apart are having completely different experiences with technology…College students scratch their heads at what their high school siblings are doing, and they scratch their heads at their younger siblings. It has sped up generational differences.”

So that brings me to something that has been on my mind for some time: with the relentless effort to extend human life — and, concomitantly, work life — how in the world are current and future generations going to remain relevant enough to have economic value during the whole of their careers?

This is already a big problem. A few months ago I stumbled across a website (that unfortunately I didn’t bookmark) devoted to baby boomers railing about the age discrimination they are facing. Trailing-edge baby boomers are still in their late forties and need to work for perhaps another 20 to 25 years. But facing both layoffs and re-employment in 2009, they feel Gen-Xers and Gen-Yers are labeling them as technologically outdated and unfit for today’s jobs. Needless to say, the people participating in the site were very angry about what they perceive they’re facing. Some blame the mainstream media for creating false stereotypes of baby boomers as part of some left-wing conspiracy (I don’t know how they reached that conclusion).

I have to say that I think the younger folks have a point. I’m a recently retired baby boomer. I spent a great deal of my personal time and money during my career staying abreast of personal computing and the internet. I can’t say the same for many of my peers. These days just having “job experience” isn’t enough. To be a strategic leader you’ve not only got to stay abreast of the basic functionality of technology but also stay in touch with the cultural changes that accompany it. Managers with years of traditional experience under their belts need to understand how changing technology can be applied to innovate business operations and to position the organization for future opportunities. Frequently that’s not what “experienced” people do.

It seems to me this problem will only be exacerbated by the accelerating rate of technological change. People in their 20s should not be too smug; they’ll be looking at the issue from the other side soon enough. And even early-career workers need to consider how they’re going to stay mentally flexible enough to absorb new technology and new culture for the several remaining decades of their work-life.


Google calling

Evidently Google is maneuvering to take on telephony head-to-head. This Wired Epicenter blog post reports on Google’s acquisition of a company called Gizmo5 that does VOIP things using open standards. So Google continues to toss bombs into the traditional business space of phone companies.

I marvel at the audacity of Google in challenging the established boundaries of the digital world. It doesn’t seem to “know its place” in the order of things. Google knows digits, and beyond that it’s willing to re-imagine any niche previously dominated by older technologies and institutions. Those managing the establishment are on notice that nothing in the digital realm is sacred.

To my mind, Google and Apple are two of the best engines of change operating worldwide today. Technology inserts itself into the social order in innocuous ways like communication and entertainment. But in the long run it becomes the platform for a new social order.


Joining the iPhone throng

When my cell phone contract came up for renewal last July I was able to convince myself that ponying up for an iPhone and the $30 monthly data plan made sense. Besides keeping up with the cool kids, I think “smart phones” are the next great pulse in digital evolution. Having a 24/7 gateway in from the internet and out to whatever data stores you want is a transition as big as the invention of the PC or of the internet itself. IMHO the mobile, here-and-now digital interface is the paradigm that will shape us from now on.

So today Apple, without a lot of fanfare, announced the 2 billionth app download from its 85,000-app App Store to the more than 50 million iPhones and iPods out there. Joining the iPhone herd is not a novel move. Moo!

But the capability most resonant with my background and interests is the health aspect of real-time mobile connection. The new phone coincides with my own efforts at a little better health behavior. I’ve already downloaded a number of “apps” to see how they can support my program. I’ll be working my way through the mini-programs in the Healthcare and Fitness category and a few from the Medicine category on the App Store. I’ll post more later on how this is working out for me.


Video at the tipping-point

I’m a believer that the future belongs to video. I assert that the dominant communications medium from here on is the internet; the primary character of that medium will be video; and online communication will, therefore, require video skills.

The data below suggest why I think this is the case. Cisco Systems, a key supplier of hardware for the internet’s infrastructure, has developed what it calls the Visual Networking Index to forecast the volume of internet traffic over the next decade or more. Some of the projections are pretty staggering.

  • Total IP traffic for 2012 will amount to more than half a zettabyte… A zettabyte is a trillion gigabytes.
  • Internet video is now approximately one-quarter of all consumer Internet traffic…
  • The sum of all forms of video (TV, VoD, Internet, and P2P) will account for close to 90 percent of consumer traffic by 2012. Internet video alone will account for nearly 50 percent of all consumer Internet traffic in 2012.
  • In 2012, Internet video traffic will be nearly 400 times the volume of the entire U.S. Internet backbone in 2000. It would take well over half a million years to watch all the online video that will cross the network each month in 2012.
  • YouTube is just the beginning. Online video will experience three waves of growth. Even with a six-fold increase between 2007 and 2012, current Internet video growth is in its initial stages. Internet video to the PC screen will soon be exceeded by a second wave arising from the delivery of Internet video to the TV screen. Beyond 2015, a third wave of video traffic will result from video communications.
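Numbers at this scale are hard to grasp, so here’s a quick back-of-the-envelope check in Python. The unit conversion comes straight from the bullets above; the average video bitrate is purely my own assumption, so treat this as a rough order-of-magnitude sketch rather than Cisco’s actual methodology.

```python
# Rough sanity check on the "half a million years of video per month" claim.
# The bitrate below is an assumed figure, not one of Cisco's.

GB_PER_ZB = 10**12                        # a zettabyte is a trillion gigabytes

total_2012_traffic_gb = 0.5 * GB_PER_ZB   # "more than half a zettabyte" of IP traffic in 2012
monthly_video_gb = total_2012_traffic_gb * 0.50 / 12  # ~50% of it is internet video

gb_per_hour = 1.5                         # assumed average video bitrate (~3.3 Mbit/s)
viewing_hours = monthly_video_gb / gb_per_hour
viewing_years = viewing_hours / (24 * 365)

print(f"{viewing_years:,.0f} years of continuous viewing per month")
```

With these assumptions the figure lands in the low millions of years per month, the same ballpark as Cisco’s “well over half a million years”; assuming a higher bitrate pulls the number down proportionally.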


They have some interesting comments about YouTube too.


“YouTube traffic is both big and small: big enough to impress but not yet big enough to overwhelm service provider networks. It is nothing short of amazing that a site launched at the end of 2005 grew to take up 4 percent of all traffic by the beginning of 2007. By Cisco’s estimates, YouTube accounted for 20 percent of online video traffic in North America in 2007, and online video-to-PC amounted to 19 percent of overall North American consumer Internet traffic. […] The success of sites like YouTube and MySpace brings to light the social aspect of video. Entertainment is not the sole purpose of video; in addition to delivering information and providing entertainment, video can serve as a centerpiece for social interaction or as a means of expression.”

Between now and 2020 they see three waves of video growth.

The thing is, these are not just big jumps in transmission volume. They will produce waves of opportunities for innovative ways of communicating, interacting and transacting whatever your business is. It’ll be at least as revolutionary as all the things we’ve seen in the past 10 or 12 years, perhaps more so. The next generation of Googles, Facebooks, etc., will be spawned by this high bandwidth space.

The time to start getting your “vision” together and to start planning for how you’ll be different is now.
