Ok, folks. DrFuzzy has moved – please visit us over at jbordeaux.com. For good.

[Image: grandfather]

How will he learn what Papa knows?

The title for this post is taken from a 1993 RAND report written by two friends and former colleagues.  It is occasionally useful to revisit the first principles when discussing weighty matters such as KM.  Or, as was the case for my friends, U.S. Strategic Forces.

A recent conversation on Twitter involved a fairly innocuous blog posting that discussed briefly the notion of tacit and explicit knowledge.  The problem, for me, is the definition for tacit knowledge in this blog was “that which has not been recorded, written, printed, or otherwise captured in some medium.”  Explicit knowledge, by contrast, has been.  Therefore, the challenge is to make tacit knowledge explicit – because knowledge is only transferred through explicit mediums.  To quote:

Unless converted into explicit knowledge, it cannot be shared because it is ‘trapped’ in one’s mind.

The post also referenced a second gentleman, who posed an even more pithy and awful definitional distinction:

He says that the tacit-explicit distinction is abstract and, in reality, knowledge is ‘either findable by your computer or it is not findable by your computer.’ 

Rather than just letting it go as the Bride often advises, I sent a brief message to the first gentleman, expressing my nonconcurrence with his definitions.  Through the magic of Twitter, this became a conversation joined by several souls, and I was finally challenged to provide some primary sources that inform my apparent heartburn.

In all honesty, while the ensuing discussion may appear “abstract” to some, the nature of knowledge should be at least partially understood if one is to consider oneself a practitioner of knowledge management.  Else, content yourself with the vital and growing field of information management – there is no shame in this whatsoever.

It is important here to note that the original post was intended to briefly acknowledge the academic distinctions, but more to exhort people to share the knowledge trapped in their heads.  I agree with this noble intent, but fear the post does violence to related theory.  Believing that knowledge is only transferred once it has been made explicit leads to mechanistic, engineering approaches to knowledge management that have not proven their worth.  Crank it out of people’s heads, churn it into a shared taxonomy or tag it somehow, and then – and only then – is it useful to others.  I would like to know the exact date that the apprentice learning model was made obsolete by advanced information technology.

While a tidy approach to KM (actually more an approach to information management), the call to “make tacit knowledge explicit” ignores much of what we know about how the world actually works.  To be more precise, we are learning the limitations of what we can know as a result of research across the disciplines of sociology, neuroscience, anthropology, and others.  

Last caveat: I do not have much argument with the practitioners who offered via Twitter that tacit knowledge can be made “partially explicit,” or with the gentleman who offered that the fragmented chatter on Twitter is actually an ideal way to begin sharing tacit knowledge.  The promise of social media, indeed, is that serendipitous connections of people, linked via fragmented information, are a step toward knowledge management that recognizes the fruitlessness of other approaches – including ones that seek to harvest tacit knowledge into explicit knowledge bins.

Here then, my brief list of “first principles” to understand before drawing conclusions regarding the “implementation” of KM.  If these are true, they should change your view on “making tacit knowledge explicit.”

0. Principle zero: define the terms.  Where did we get this term “tacit knowledge?”  Michael Polanyi described it this way:

Thus to speak a language is to commit ourselves to the double indeterminacy due to our reliance both on its formalism and on our own continued reconsideration of this formalism in its bearing on our experience.  For just as, owing to the ultimately tacit character of all our knowledge, we remain ever unable to say all that we know, so also, in view of the tacit character of meaning, we can never quite know what is implied in what we say.

While technically true that “not findable on your computer” agrees with this paragraph, I find that characterization falls short of Polanyi’s meaning.

1. We don’t know how we know what we know, or make decisions; and therefore unwittingly misrepresent what we know when asked to describe the process.  Lakoff claims that understanding “takes place in terms of entire domains of experience and not in terms of isolated concepts.”  He shows how these experiences are a product of:

  • Our bodies (perceptual and motor apparatus, mental capacities, emotional makeup, etc.)
  • Our interactions with our physical environment (moving, manipulating objects, eating, etc.)
  • Our interactions with other people within our culture (in terms of social, political, economic, and religious institutions) p.117

Gompert, et al., examined the dual roles of information and intuition in decision-making in their investigation into how to increase “battle wisdom” for U.S. forces.  Asking General Patton how he made the decisions he did will not prepare you to respond similarly in like circumstances.

Snowden puts it this way:

There is an increasing body of research data which indicates that in the practice of knowledge people use heuristics, past pattern matching and extrapolation to make decisions, coupled with complex blending of ideas and experiences that takes place in nanoseconds. Asked to describe how they made a decision after the event they will tend to provide a more structured process oriented approach which does not match reality.

Medina agrees:

The brain constantly receives new inputs and needs to store some of them in the same head already occupied by previous experiences.  It makes sense of its world by trying to connect new information to previously encountered information, which means that new information routinely resculpts previously existing representations and sends the re-created whole back for new storage.  What does this mean?  Merely that present knowledge can bleed into past memories and become intertwined with them as if they were encountered together. Does that give you only an approximate view of reality? You bet it does. p.130

2. We learn through fragmented input and internal cognitive patterns, embedding extensive context from our environment at the time of learning.  Medina, discussing the work of Nobel Laureate Eric Kandel (2000), relates how the brain rewires itself.

Kandel showed that when people learn something, the wiring in their brain changes.  He demonstrated that acquiring even simple pieces of information involves the physical alteration of the structure of the neurons participating in the process. p.57

Fauconnier and Turner discuss cognition – in part – in terms of guiding principles for completing patterns, as humans seek to blend new concepts onto what they already know.

Pattern Completion Principle: Other things being equal, complete elements in the blend by using existing integrated patterns as additional inputs.  Other things being equal, use a completing frame that has relations that can be the compressed versions of the important outer-space vital relations between the inputs. p.328

Brown, et al., take on traditional teaching methods in their work showing that “knowledge is situated, being in part a product of the activity, context, and culture in which it is developed and used.”

The activity in which knowledge is developed and deployed, it is now argued, is not separable from or ancillary to learning and cognition. Nor is it neutral. Rather, it is an integral part of what is learned. Situations might be said to co-produce knowledge through activity. Learning and cognition, it is now possible to argue, are fundamentally situated.

The context within which something is learned cannot be reduced to information metadata – it is an integral part of what is learned.

3. We always know more than we can say, and we will always say more than we can write down. For my third principle, I am borrowing directly from Dave Snowden’s extension of Polanyi.  (Snowden’s blog should be at the top of your KM reading list):

 The process of taking things from our heads, to our mouths (speaking it) to our hands (writing it down) involves loss of content and context. It is always less than it could have been as it is increasingly codified.

Having read through the first two principles, it should now be evident that relating what we know via conversation or writing or other means of “making explicit” removes integral context, and therefore content.  Explicit knowledge is simply information – lacking the human context necessary to qualify it as knowledge.  Sharing human knowledge is a misnomer; the most we can do is help others embed inputs as we have done, so that they may approach the world as we do, based on our experience.  This sharing is done on many levels, in many media, and in contexts as close to the original ones as possible, so that the experience can approximate the original.

The grandfather above will not conduct after-action reviews regarding his fishing experiences, write a pamphlet about fishing, and upload it to the family intranet.  Rather, he will take the boy fishing – where he will show him how to tie lures, cast effectively, breathe in the experience, and hopefully learn to love what he loves.

References:

Brown, J. S., Collins, A., & Duguid, P. (1989). Situated Cognition and the Culture of Learning. Educational Researcher, January-February, 32-42.

Fauconnier, G., & Turner, M. (2002). The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. New York, NY: Basic Books, Perseus Books Group.

Gompert, D. C., Lachow, I., & Perkins, J. (2006). Battle-Wise: Seeking Time-Information Superiority in Networked Warfare. Washington, DC: National Defense University Press.

Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. Chicago, IL: The University of Chicago Press.

Medina, J. (2008). Brain Rules: 12 Principles for Surviving and Thriving at Work, Home, and School. Seattle, WA: Pear Press.

Polanyi, M. (1974). Personal Knowledge: Towards a Post-Critical Philosophy. Chicago, IL: University of Chicago Press.

Snowden, D. J. (2008, October 10). Rendering Knowledge.   Retrieved January 5, 2009, from http://www.cognitive-edge.com/blogs/dave/2008/10/rendering_knowledge.php

My son was leaving after his holiday visit, halfway out the door, when the Bride stopped him.  He had already been asked by his sister if he knew how to get back to New York from Northern Virginia – “Yes, I have GPS.”  The Bride, however, had updated information.  “Don’t trust the GPS to get to the Wilson Bridge, it will tell you to stay to the right, but the exit is on the left now. Read the signs, not the GPS.”

There had been an eight-year project to redo the “mixing bowl” in Springfield, VA, completed recently.  So recently, in fact, that most GPS systems are not programmed to “know” the new configuration.  This reminds me of the outstanding principle allegedly detailed in a Swedish Army Manual:  If the terrain and the map do not agree, follow the terrain. 

Road signs hold environmental information; we trust them to help us navigate.  Or did.  If you’re using GPS, how attentive are you to road signs anymore?  If you’re in a strange city and the road signs “disagree” with your GPS instructions, what is your choice?  What if you’re driving an Opel Insignia, with front cameras that recognize road signs?

Personally, I trust my GPS system, even when I notice “she” is taking me on the occasional odd path.  It is easier, for me, than learning my way around an area.  The Bride, however, has a running disagreement with my GPS system – and often asks me what “she” is thinking.  I let the women fight it out most days, although I often find myself in odd conversations defending the GPS system’s behavior.  It passes the time.

I should pause here and note a definitional issue with my logic.  Several friends have pointed out to me that the scale is the thing in cloud computing.  It is not simply offloading cognitive processing to a distant computer or connecting to distributed sensors.  It is that we can connect to many computers or potentially all sensors.  The cloud is not the thermometer or my local page on wunderground.com, it is the fact that I can know the temperature for most any point on earth.  I do not disagree; scale is indeed the thing for cloud computing.  However, I’m trying to think through the implications of this scale for our cloud cognition behavior, which predates computers.

Back to trust.  I recently met with a firm working on second-factor authentication.  Identity-centric computing: how to ensure the cloud trusts that the individual is who they say they are.  The information sharing strategy from the Office of the Director of National Intelligence states that individuals need to share information on a network with mechanisms to ensure other users have the appropriate access and know to protect information found there.  In part, this firm is helping answer initiative 2B from the ODNI 500-Day Plan, “Implement Attribute-Based Access and Discovery.”  This firm has an approach that scales massively, and may answer many of the issues for “government 2.0” applications.
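To make the idea concrete, attribute-based access can be caricatured in a few lines.  This is my own toy sketch – the attribute names and the policy shape are invented for illustration, and are not taken from the ODNI plan or any real system: access is granted only when a user’s attributes satisfy every requirement the resource’s policy lists.

```python
# A toy attribute-based access control (ABAC) check. The attribute names and
# policy structure are invented for illustration; they are not from the ODNI
# 500-Day Plan or any deployed system.
def can_access(user_attrs, policy):
    """Grant access only if the user satisfies every attribute the policy requires."""
    return all(user_attrs.get(key) in allowed for key, allowed in policy.items())

# A hypothetical policy: the resource requires both a clearance level and an
# agency affiliation drawn from an allowed set.
policy = {
    "clearance": {"SECRET", "TOP SECRET"},
    "agency": {"DIA", "CIA", "NSA"},
}

analyst = {"clearance": "SECRET", "agency": "DIA"}
contractor = {"clearance": "PUBLIC TRUST", "agency": "DIA"}

print(can_access(analyst, policy))     # True
print(can_access(contractor, policy))  # False
```

The point of the model is that the decision is computed from attributes at request time, rather than from a pre-built access list – which is what lets it scale across a whole national security system.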

We know to trust the terrain if it disagrees with the map.  We used to trust road signs, but now often don’t notice them – particularly if we are waiting for our GPS voice to tell us the way.  (For whatever reason, we are so inattentive that we are now building cars to read the signs for us!)  We trust Amazon with our credit card information and our buying history.  We trust eBay to secure the integrity of its online auctions.  We trust Google and Facebook and MySpace with all sorts of personal information, even while we have little understanding of the current and potential use of this trusted information by these companies.  (It’s sometimes useful to think of them as companies rather than websites.)

Trust, we are told, is gained through an expectation of things like authority, reciprocity and care.  However, trust for cloud cognition may be offered on another basis – convenience.  It is simply more convenient to trust than not to.  How many users read End-User License Agreements (EULA)?  Remember the controversy over the EULA for the Google Chrome browser?  We are giving up control and safety for convenience, because we are interested primarily in what works.  We will trust the cloud so long as it does not violate our trust, or so we tell ourselves. We are frogs in the frying pan, dimly aware of the ongoing war to douse the flame before our trusting nature dooms us.  We rush to build authenticating mechanisms for this unstoppable move to the cloud, even as malefactors rush to steal from us by exploiting our trust.

The analogy to international banking systems is irresistible.  We trusted in financial wizards because it worked, and there was no reason not to – except for those who took the time to understand the nature of the underlying “securities.” We already trust much and offload some measure of our lives to the ever-increasing cloud.  We write of ways to increase trust, while the real job is to ensure the cloud earns the trust we have already given it.

Thinking out loud here…

Chat last night on Twitter about cloud computing, the definition having been recently updated on Wikipedia by @bobgourley.  One gentle challenge was offered by @lewisshepherd:  By the simpler definition, a print server would be deemed cloud computing – is that what is meant?  

At one level, it is not altogether useful to have such broad definitions that the reader is unable to move from the definition to understanding what LinkedIn and Amazon Web Services have in common.  However, as a “specialist of the whole,” I was immediately seduced by the simplicity.  If a user can use distant computers to process local jobs, she is working with cloud computing.  (Cloud computering?)

Take this to another level.  In a most excellent book, Natural Born Cyborgs, Andy Clark wrote that we started offloading cognitive processes when we put on wristwatches.  When someone asks you if you have the time, you say yes – because you know you can look at the watch to get the current time.  You likely don’t know it without checking; this may be why you’re asked if you “have” the time, rather than if you “know” the time.

If someone asks for your phone number, you retrieve it from the wonderful wetware behind your eyes. (Some of us of a certain age eventually lose this information, “I don’t know, I never call it!”)

So what is the difference between looking up your phone number in your brain and checking your wristwatch?  Probably the reliance on previously unrelated variables – if the silly watch battery dies, I suddenly don’t know the time.

Somewhere around 1000 B.C., I suspect cave folk knew it was cold by walking outside and seeing the ice form.  Around 1617, the first thermoscopes were used to compare temperature changes.  As a child, I saw mercury thermometers on the house to tell me when it was freezing.  This morning, the Bride checked weather.com to find out our (somewhat) local temperature is 14 degrees F.  At what stage did we offload cognitive processes to “know” the local temperature?

Andy Clark is right: we are already cyborgs to a degree.  We have always used technology to help us offload cognitive tasks.  As we consider the various definitions for “cloud computing,” it may be useful to consider it as the next logical step in moving from the cave to the hive mind.

What?

Well, beyond technology – we have also used our social connections to better understand our environment.  “Is it cold out there” to “does anyone know any good new restaurants” is a logical progression.  One is shouted to your fellow cave-dweller, the other a question posed using social media.

So cloud cognition is the offloading of cognitive processes, but also the use of distributed sensors to better understand our habitat.  No man is an island, indeed.

“The [U.S. national security] system fails to know what it knows, to make sense of information and trends in order to understand an increasingly complex global environment, to make effective and informed decisions, and to learn over time what works—and what does not work.”

In a blog post for the FAS Project on Government Secrecy, Stephen Aftergood refers to the Project for National Security Reform (PNSR) – specifically the work conducted by my team, the Knowledge Management Working Group, in the area of classification reform.  Mr. Aftergood raises some important points, and I will try to respond to them here.

It is important to make clear that I am not speaking on behalf of the Project, but instead clarifying and discussing the analysis my team has already completed. This is my personal blog, and not sponsored or sanctioned by the Project for National Security Reform.

I appreciate the opportunity to discuss our work, as we worked against a compressed timeline and the report would have benefited greatly from additional time and resources.  My team’s sections on knowledge management probably need more explanation than most, and I hope to expand on the ideas we put in that paper soon.  I am hopeful that through conversations such as these I can add detail – but also learn from all of you how to improve our thinking on this important topic.

From the Secrecy News blog:

“’Sharing information across organizational boundaries is difficult… [because] agency cultures discourage information sharing,’ the report states.  But this is a restatement of the problem, not an explanation of it.”

If that were all we stated in our problem statement, Mr. Aftergood would have a more valid case in finding our work shallow.  In addition to his reference regarding impediments to information sharing, however, we also discuss (pp. 331-362):

- Poor interoperability on the classified side

- Overclassification

- The proliferation of the “sensitive but unclassified” designation

- Confusing technical connections with collaboration

- Information systems missing common data abstractions, protocols, and compatible business logic

- Inability of systems to understand business limitations and context of data

The recommendations we make in the report on this topic are likewise truncated in Mr. Aftergood’s treatment.

“And so the real upshot of the report’s argument is that the classification system cannot be fixed at all, at least not in isolation or on its own existing terms…

They vaguely advocate a “common [government-wide] approach for information classification [that] will increase transparency, improve accessibility, and reinforce the overall notion that personnel in the national security system are stewards of the nation’s information, not owners thereof.”

We didn’t intend to be vague, and apologize if the reader is left believing that the “teams” recommendation was sufficient to resolve classification issues.  In fact, we recommend (p.450) the establishment of an Office for Decision Support within the NSC Executive Secretariat, which would include the functions within ODNI (Special Security Center) that are currently working to establish a common security classification across the national security system.  We believe the work this office is already doing is valuable, and seek to give it budgetary and enforcement mechanisms to ensure it succeeds.  From our recommendations:

“[T]he Special Security Center within the Office of the Director of National Intelligence currently works to establish uniformity and reciprocity across the intelligence community, but this approach should be expanded to include the entire national security system.”

Mr. Aftergood is correct that we believe a systemic approach to resolving the problems of the national security system  is appropriate.  Hence, while we recommend the above for classification issues, we recognize that without the reforms mentioned in the human capital, strategy, and resources sections – the ‘knowledge management’ problems will not be resolved.  

For example, the fact that information security professionals are free to assert controls that hamper information sharing and other business functions remains a problem.

“There is often a tension between information security and operational effectiveness. The latter is enabled by easy access to information and the free flow of information both within and across organizational boundaries. The former often requires tight controls on information access and sharing based on a wide range of parameters (e.g., classification level, organizational affiliation, ‘need to know’ requirements, etc.) in order to minimize risks such as unauthorized access to data, data theft, and data manipulation. Historically, national security organizations have placed more emphasis on information security requirements than on the imperatives of information access and sharing. The result has been a culture of ‘risk avoidance’ that has limited the ability of key people and organizations to work collaboratively.”

I appreciate the discussion and review of our work, which we view as the beginning of a conversation.  My thanks to Mr. Aftergood for engaging with us.

Chain of events: an acquaintance writes an email, referencing this blog from APQC.  I respond with a rant, augmented by a couple of acidic Twitter messages to release steam.  These rants are posted to my Facebook status line, resulting in a brief conversation there with a FB friend – who initially believes I’ve lost my mind.

And now here.  Why here?  I’ve already responded to the acquaintance, and interacted with the FB friend, and overall made my point.  Well, I’m blogging now to establish some measure of permanence to my thoughts.  My apologies then to those two individuals who have already been subjected to my rant.

The APQC blog asked a very reasonable question:  “What’s the Deal with Lessons Learned?”  The author then posits several reasons:

“What is it about capturing and applying lessons learned that so often trips us up and causes us to never get past the “capture” step of the process? Is it that the mistake or error that prompts the lesson is so context-dependent that we think others couldn’t benefit from it and therefore we don’t capture it at all? Or could it be that whatever repository these lessons disappear into is so unorganized that retrieving them in order to apply them is a huge undertaking? Or is it simple communication–in other words, we simply don’t share our lessons learned proactively with those who might benefit from them? Or some combination of the above?”

My answer: E!  None of the above.

My acquaintance works in the Pentagon alongside his command’s “lessons learned” people, and shared that they go into the field, watch exercises, and then let people know where they made repeated mistakes.  He was asking the same question: why don’t these programs work as intended?

In organizations where the machinery is larger than the man, where we serve and tend to the machines, where human behavior and decisions are minor aspects of the overall production line – then things like “lessons learned” along with six sigma, Lean, etc., make some sense and have proven results.  The trouble comes when we apply these mechanisms in organizations where the human predominates.

My response is below, slightly edited, but retaining all the snarkiness.  I should add that I was responding in the context of military training and operations.  In most organizations, my opinion is strongly against “lessons learned” programs.  

Regarding lessons learned…  Let’s think about this for a moment.  The underlying presumption regarding “lessons learned” is that what worked before, will work again – and the context around the new situation will not differ enough to make the “lesson” insufficient to the new challenge.  This is arrogant, demonstrably false and dangerous.

First off, when gathering these lessons, we interview people regarding their decisions.  Trouble is, people don’t know how they make decisions.  Not truly; they fill in gaps of reasoning where they actually went with deep intuition.  Finding it hard to explain their intuition, they inaccurately weight other decision variables, dutifully captured by the interviewer.  And the lie is born.
Second, context matters.  It actually matters to consider the situation as it lies, and the application of sterile “lessons” that carry a (now lost) context will result in only random chances of success.  Complexity science reveals the teleological realities – you cannot predict events in complex systems; you can set boundaries, establish attractors and modulators and monitor for patterns.  In addition, these systems are highly sensitive to starting conditions (see Lorenz).  Where do “lessons learned” fit against what we know about context-sensitive complex systems?
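The sensitivity to starting conditions invoked above is easy to demonstrate.  The sketch below is my own toy illustration (not from the post): it integrates the classic Lorenz system twice, with initial states that differ by one part in a billion, and watches the trajectories diverge.

```python
# Toy demonstration of sensitivity to initial conditions in the Lorenz
# system (my own illustration; the parameters are Lorenz's classic values).
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance one state (x, y, z) by a single forward-Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # identical except for one part in a billion

for _ in range(40_000):       # 40 simulated time units
    a, b = lorenz_step(a), lorenz_step(b)

separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)  # grown from 1e-9 to roughly the size of the attractor
```

The same tiny perturbation in a linear system would stay tiny; here it grows until the two runs share nothing but their equations – which is exactly the trouble with replaying a context-stripped “lesson” into a new situation.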
Fortunately, no one actually uses lessons learned databases to make decisions.  When you are faced with a challenge, do you turn to the ‘lessons learned’ database, or to a trusted friend who may have faced similar challenges?  The latter is likely true, and you update this friend with your current circumstance so that he can match it against his experience – you both then discuss what may be different this time and the limitations of his experience…and then you learn together.
So what should your colleagues be doing?  Collecting “lessons observed” and distilling principles that may be more universal than the specific lessons – but more importantly, they should enhance the connections among professionals.  Consider the success of CompanyCommand, where company commanders are able to collaborate and share experiences in near-real time.  Why is this such a success when the Army for years has had the CALL program?
Given this, which should your colleagues be doing?  Mimicking CALL, or CompanyCommand?
Lessons learned programs don’t work because they don’t align with how we think, how we decide, or even an accurate history of what happened.  Other than that – totally worth the investment.

I am seduced by the interest in yesterday’s post, which remains sloppy and in need of tightening.  There are many types I missed, so let me try to flesh this out a bit.  To review, these are observations, not completed analysis.  But through this first pass, we may glean some common characteristics.  To be serious about this, I would need a significant data sample – please do not imagine I have sifted through thousands of Twitter users to develop these types.

But I’d like to.

To recap, we have:

Incurious Celebrity – 1:60+ Augmenting value provided elsewhere, but not actively listening to Twitter. May respond to @ messages.  Poster girl: @breagrant.  Also a member: @anamariecox (1:122), whose messages are often worth printing off and framing.  Ms. Cox gained some fame by raising a substantial amount of money through Twitter and her blog in order to finish participating as part of the press gaggle for the McCain campaign.  She can rally support, but remains an Incurious Celebrity.

Curious Celebrity – 1:1  Augmenting value provided elsewhere, also engaging and listening to their followers.  May respond to @ messages, but also displays evidence they are proactively engaged. Poster guy @stephenfry (1:1)

Engaged Intellectual – (1:10) truly seeking to engage the people they follow, providing unique value online. Links to items they are reading or writing – and relies on feedback.  Would plotz without it. @cheeky_geeky (1:8) a poster guy here.

Balanced Invisible – (1:1), for small values of 1.  Engaged, but mainly followed by real life friends and Mom.  I’m trying to break out, I really am.  Sigh. 

Empty Suit – marketers, spammers, and other folks who believe connecting with zero value is useful for anyone other than themselves.  Yesterday I provided an egregious example; today, here’s “coach Judy,” someone whose ratio is (1:1).  However, this graphic demonstrates an actual feed from a half hour of her Twitter life (the “free gift” is a blog posting).  She may be doing something really valuable to get all those followers, but her use of Twitter makes her an Empty Suit.

[Image: Twitter spam]
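Before moving on to journalists: the taxonomy so far can be caricatured as a crude decision rule.  The sketch below is entirely my own – the numeric thresholds and the two boolean flags are invented stand-ins for judgments the post makes qualitatively, not definitions from the post.

```python
# A toy classifier for the Twitter "types" above. The thresholds and the
# engagement/value flags are my own invented approximations, not the post's
# definitions. ratio = followers / following (written 1:N in the post).
def classify(followers, following, engages_followers, provides_value):
    ratio = followers / max(following, 1)
    if not provides_value:
        return "Empty Suit"
    if followers >= 100_000 and engages_followers:
        return "Curious Celebrity"       # huge audience, still listening
    if ratio >= 10 and not engages_followers:
        return "Incurious Celebrity"     # broadcasting, not listening
    if engages_followers and ratio >= 5:
        return "Engaged Intellectual"    # followed for ideas, relies on feedback
    return "Balanced Invisible"          # roughly 1:1, friends and Mom

print(classify(60, 1, False, True))      # Incurious Celebrity (1:60)
print(classify(10375, 969, True, True))  # Engaged Intellectual (~1:10)
print(classify(40, 38, True, True))      # Balanced Invisible (~1:1)
```

Run against the handles above, the rule mostly agrees with the post’s labels – which suggests the taxonomy is really two dimensions (audience ratio and willingness to listen) plus a value judgment.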

You may think that a form of the Incurious Celebrity would be journalism outlets, such as @nytimes (1:425).  They satisfy the criteria: high ratio of followers to followed, and providing intrinsic value.  However, their twitter messages are a form of “corporate communications,” in that they use Twitter to augment their news delivery.  

Journalists need their own types.  

Here are a few:  @nytimes (1:425) is, sorry, Old Media.  Why?  They use twitter entirely to draw eyeballs to their existing media channel.  Their messages are entirely links to their web page offering.  However, they are offering original content, as they employ actual journalists. Old Media remains a source of valuable information. 

@breakingnewson (1:10) is a Resourceful Repackager.  Their ratio is based on fairly large numbers (969:10,375), and they aren’t just following other news outlets.  Rather, they are monitoring news through various media channels and passing on breaking news to Twitter.

@ricksanchezcnn (1:2) is an example of Listening Media.  While primarily appearing on the unblinking eye of CNN, he incorporates edited Twitter streams into his newscast.  More importantly, he uses the Twitter community for “show prep.”  This is an important step: rather than treating Twitter users (only) as if we’re zoo creatures, Mr. Sanchez is also interacting and listening.

Full disclosure:  I stopped following Rick Sanchez in a snit after he posted a question during show prep one day about the increase in hate speech directed at Barack Obama.  It’s entirely possible I wrote him several messages asking (ok, demanding) him to explain the difference between news and incitement.  He ignored (or likely, didn’t see) the messages, and I unfollowed. I’m still snit-bound.


Interestingly, CNN’s @andersoncooper (1:780) (who violates the ‘cnn’ suffix that is otherwise apparently a station norm) is profoundly Old Media.  Such a young, hip guy, but his messages are all pointers to his area on cnn.com, and his ratio is disturbing.

Not only is he following only seven feeds, but the only human on that list is @jackcafferty (1:265).  Mr. Cafferty, whose job appears to consist entirely of provoking audience engagement through email, is remarkably also Old Media.  The only human on his list is, yes, @andersoncooper.  Someone get these guys telephones.  

Brief rant. Ok, Jack?  “If you didn’t see your email here, go to my blog where they’re all posted.”  So:  write you an email so that I may then go to your “blog” and read it?  Aren’t narcissists usually more resourceful?  

@fox5newsedge (1:1) is truly radical, and may be an example of Trusted Media.  Yes, I did say that out loud.  This is someone of the Listening Media type who also shares non-news insights.  He responds to listeners, their ideas inform his on-air presentations, he provides a “tease” to his broadcasts and links to content, and uses follow-up (what he calls f/u) ideas to provide a more tailored broadcast.  Finally, and most importantly, he shares his personal Twitter account (@brianbolter (1:5)) there – where I can assure you he is himself.  This local news broadcaster thanked me recently when I provided a cleaning-solution idea to fix an unfortunate marriage between a decanter of red wine and his carpet.

Mr. Bolter is building out a trust network by connecting in a meaningful way with his audience.  This is nontrivial; Washington, DC is a town where news is often made by people who “leak” information to trusted news sources.  What does it mean for a journalist to gain trust among thousands of Washingtonians with very little effort?  Do I trust Mr. Bolter because he spills red wine, is witty, and is nervous about an upcoming laparoscopic procedure?  Yes, because he is connecting on a human level.

This matters.   

I don’t know if this little exercise is useful, but it’s fun to ramble once in a while.
