Ok, folks. DrFuzzy has moved – please visit us over at jbordeaux.com. For good.
January 5, 2009
The title for this post is taken from a 1993 RAND report written by two friends and former colleagues. It is occasionally useful to revisit the first principles when discussing weighty matters such as KM. Or, as was the case for my friends, U.S. Strategic Forces.
A recent conversation on Twitter involved a fairly innocuous blog posting that discussed briefly the notion of tacit and explicit knowledge. The problem, for me, is the definition for tacit knowledge in this blog was “that which has not been recorded, written, printed, or otherwise captured in some medium.” Explicit knowledge, by contrast, has been. Therefore, the challenge is to make tacit knowledge explicit – because knowledge is only transferred through explicit mediums. To quote:
Unless converted into explicit knowledge, it cannot be shared because it is ‘trapped’ in one’s mind.
The post also referenced a second gentleman, who posed an even more pithy and awful definitional distinction:
He says that the tacit-explicit distinction is abstract and, in reality, knowledge is ‘either findable by your computer or it is not findable by your computer.’
Rather than just letting it go as the Bride often advises, I sent a brief message to the first gentleman, expressing my nonconcurrence with his definitions. Through the magic of Twitter, this became a conversation joined by several souls, and I was finally challenged to provide some primary sources that inform my apparent heartburn.
In all honesty, while the ensuing discussion may appear “abstract” to some, the nature of knowledge should be at least partially understood if one is to consider oneself a practitioner of knowledge management. Else, content yourself with the vital and growing field of information management – there is no shame in this whatsoever.
It is important here to note that the original post was intended to briefly acknowledge the academic distinctions, but more to exhort people to share the knowledge trapped in their heads. I agree with this noble intent, but fear the post does violence to related theory. Believing that knowledge is only transferred once it has been made explicit leads to mechanistic, engineering approaches to knowledge management that have not proven their worth. Crank it out of people’s heads, churn it into a shared taxonomy or tag it somehow, and then – and only then – is it useful to others. I would like to know the exact date that the apprentice learning model was made obsolete by advanced information technology.
While a tidy approach to KM (actually more an approach to information management), the call to “make tacit knowledge explicit” ignores much of what we know about how the world actually works. To be more precise, we are learning the limitations of what we can know as a result of research across the disciplines of sociology, neuroscience, anthropology, and others.
One last caveat: I do not have much argument with the practitioners who offered via Twitter that tacit knowledge can be made “partially explicit,” or with the gentleman who offered that the fragmented chatter on Twitter is actually an ideal way to begin sharing tacit knowledge. The promise of social media is precisely that serendipitous connections among people, linked via fragmented information, are a step toward knowledge management that recognizes the fruitlessness of other approaches – including those that seek to harvest tacit knowledge into explicit knowledge bins.
Here then, my brief list of “first principles” to understand before drawing conclusions regarding the “implementation” of KM. If these are true, they should change your view on “making tacit knowledge explicit.”
0. Principle zero: define the terms. Where did we get this term “tacit knowledge”? Michael Polanyi described it this way:
Thus to speak a language is to commit ourselves to the double indeterminacy due to our reliance both on its formalism and on our own continued reconsideration of this formalism in its bearing on our experience. For just as, owing to the ultimately tacit character of all our knowledge, we remain ever unable to say all that we know, so also, in view of the tacit character of meaning, we can never quite know what is implied in what we say.
While technically true that “not findable on your computer” agrees with this paragraph, I find that characterization falls short of Polanyi’s meaning.
1. We don’t know how we know what we know, or make decisions; and therefore unwittingly misrepresent what we know when asked to describe the process. Lakoff claims that understanding “takes place in terms of entire domains of experience and not in terms of isolated concepts.” He shows how these experiences are a product of:
- Our bodies (perceptual and motor apparatus, mental capacities, emotional makeup, etc.)
- Our interactions with our physical environment (moving, manipulating objects, eating, etc.)
- Our interactions with other people within our culture (in terms of social, political, economic, and religious institutions) p.117
Gompert et al. examined the dual roles of information and intuition in decision-making in their investigation into how to increase “battle wisdom” for U.S. forces. Asking General Patton how he made the decisions he did will not prepare you to respond similarly in like circumstances.
Snowden puts it this way:
There is an increasing body of research data which indicates that in the practice of knowledge people use heuristics, past pattern matching and extrapolation to make decisions, coupled with complex blending of ideas and experiences that takes place in nanoseconds. Asked to describe how they made a decision after the event they will tend to provide a more structured process oriented approach which does not match reality.
Medina describes how memory itself is reconstructive:
The brain constantly receives new inputs and needs to store some of them in the same head already occupied by previous experiences. It makes sense of its world by trying to connect new information to previously encountered information, which means that new information routinely resculpts previously existing representations and sends the re-created whole back for new storage. What does this mean? Merely that present knowledge can bleed into past memories and become intertwined with them as if they were encountered together. Does that give you only an approximate view of reality? You bet it does. p.130
2. We learn through fragmented input and internal cognitive patterns, embedding extensive context from our environment at the time of learning. Medina, discussing the work of Nobel Laureate Eric Kandel (2000), relates how the brain rewires itself.
Kandel showed that when people learn something, the wiring in their brain changes. He demonstrated that acquiring even simple pieces of information involves the physical alteration of the structure of the neurons participating in the process. p.57
Fauconnier and Turner discuss cognition – in part – in terms of guiding principles for completing patterns, as humans seek to blend new concepts into what they already know.
Pattern Completion Principle: Other things being equal, complete elements in the blend by using existing integrated patterns as additional inputs. Other things being equal, use a completing frame that has relations that can be the compressed versions of the important outer-space vital relations between the inputs. p.328
Brown et al. take on traditional teaching methods in their work showing that “knowledge is situated, being in part a product of the activity, context, and culture in which it is developed and used.”
The activity in which knowledge is developed and deployed, it is now argued, is not separable from or ancillary to learning and cognition. Nor is it neutral. Rather, it is an integral part of what is learned. Situations might be said to co-produce knowledge through activity. Learning and cognition, it is now possible to argue, are fundamentally situated.
The context within which something is learned cannot be reduced to information metadata – it is an integral part of what is learned.
3. We always know more than we can say, and we will always say more than we can write down. For my third principle, I am borrowing directly from Dave Snowden’s extension of Polanyi. (Snowden’s blog should be at the top of your KM reading list):
The process of taking things from our heads, to our mouths (speaking it) to our hands (writing it down) involves loss of content and context. It is always less than it could have been as it is increasingly codified.
Having read through the first two principles, it should now be evident that relating what we know via conversation, writing, or other means of “making explicit” strips away integral context, and therefore content. Explicit knowledge is simply information, lacking the human context necessary to qualify it as knowledge. “Sharing human knowledge” is thus a misnomer; the most we can do is help others embed inputs as we have done, so that they may approach the world as we do based on our experience. This sharing happens on many levels, in many media, and in contexts as close to the original as possible, so that the new experience can approximate the original.
The grandfather above will not conduct after-action reviews regarding his fishing experiences, write a pamphlet about fishing, and upload it to the family intranet. Rather, he will take the boy fishing – where he will show him how to tie lures, cast effectively, breathe in the experience, and hopefully learn to love what he loves.
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated Cognition and the Culture of Learning. Educational Researcher, January-February, 32-42.
Fauconnier, G., & Turner, M. (2002). The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. New York, NY: Basic Books, Perseus Books Group.
Gompert, D. C., Lachow, I., & Perkins, J. (2006). Battle-Wise: Seeking Time-Information Superiority in Networked Warfare. Washington, DC: National Defense University Press.
Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. Chicago, IL: The University of Chicago Press.
Medina, J. (2008). Brain Rules: 12 Principles for Surviving and Thriving at Work, Home, and School. Seattle, WA: Pear Press.
Polanyi, M. (1974). Personal Knowledge: Towards a Post-Critical Philosophy. Chicago, IL: University of Chicago Press.
Snowden, D. J. (2008, October 10). Rendering Knowledge. Retrieved January 5, 2009, from http://www.cognitive-edge.com/blogs/dave/2008/10/rendering_knowledge.php
January 2, 2009
My son was leaving after his holiday visit, halfway out the door, when the Bride stopped him. His sister had already asked whether he knew how to get back to New York from Northern Virginia – “Yes, I have GPS.” The Bride, however, had updated information. “Don’t trust the GPS to get to the Wilson Bridge, it will tell you to stay to the right, but the exit is on the left now. Read the signs, not the GPS.”
There had been an eight-year project to redo the “mixing bowl” in Springfield, VA, completed recently. So recently, in fact, that most GPS systems are not programmed to “know” the new configuration. This reminds me of the outstanding principle allegedly detailed in a Swedish Army Manual: If the terrain and the map do not agree, follow the terrain.
Road signs hold environmental information; we trust them to help us navigate. Or did. If you’re using GPS, how attentive are you to road signs anymore? If you’re in a strange city and the road signs “disagree” with your GPS instructions, what is your choice? What if you’re driving an Opel Insignia, with front cameras that recognize road signs?
Personally, I trust my GPS system, even when I notice “she” is taking me on the occasional odd path. It is easier, for me, than learning my way around an area. The Bride, however, has a running disagreement with my GPS system – and often asks me what “she” is thinking. I let the women fight it out most days, although I often find myself in odd conversations defending the GPS system’s behavior. It passes the time.
I should pause here and note a definitional issue with my logic. Several friends have pointed out to me that the scale is the thing in cloud computing. It is not simply offloading cognitive processing to a distant computer or connecting to distributed sensors. It is that we can connect to many computers or potentially all sensors. The cloud is not the thermometer or my local page on wunderground.com; it is the fact that I can know the temperature for most any point on earth. I do not disagree, scale is indeed the thing for cloud computing. However, I’m trying to think through the implications of this scale for our cloud cognition behavior, which predates computers.
Back to trust. I recently met with a firm working on second-factor authentication: identity-centric computing, how to ensure the cloud trusts that the individual is who they say they are. The information sharing strategy from the Office of the Director of National Intelligence states that individuals need to share information on a network with mechanisms to ensure other users have the appropriate access and know how to protect information found there. In part, this firm is helping answer initiative 2B from the ODNI 500-Day Plan, “Implement Attribute-Based Access and Discovery.” The firm has an approach that scales massively, and may answer many of the issues for “government 2.0” applications.
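The basic idea behind attribute-based access can be sketched in a few lines: instead of granting access to named users, each resource carries a policy over attributes, and any requester whose attributes satisfy the policy gets in. That is why the approach scales. The sketch below is purely illustrative; the attribute names and policy shape are my own assumptions, not anything from the ODNI documents or the firm in question.

```python
# Illustrative sketch of attribute-based access control (ABAC).
# Access decisions compare a requester's attributes against a
# resource's policy, rather than consulting a per-user access list.

def can_access(user_attrs: dict, policy: dict) -> bool:
    """Grant access only if every attribute the policy requires is
    present in the user's attributes with an allowed value."""
    return all(
        user_attrs.get(attr) in allowed
        for attr, allowed in policy.items()
    )

# A resource tagged with required attributes instead of named users
# (hypothetical attribute names):
policy = {
    "clearance": {"SECRET", "TOP SECRET"},
    "community": {"intelligence", "defense"},
}

analyst = {"clearance": "SECRET", "community": "intelligence"}
visitor = {"clearance": "PUBLIC TRUST", "community": "defense"}

print(can_access(analyst, policy))  # True
print(can_access(visitor, policy))  # False
```

Because the policy travels with the resource and the attributes travel with the user, adding a millionth user or a millionth document requires no per-pair bookkeeping, which is the scaling property the initiative is after.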
We know to trust the terrain if it disagrees with the map. We used to trust road signs, but now often don’t notice them – particularly if we are waiting for our GPS voice to tell us the way. (For whatever reason, we are so inattentive that we are now building cars to read the signs for us!) We trust Amazon with our credit card information and our buying history. We trust eBay to secure the integrity of its online auctions. We trust Google and Facebook and MySpace with all sorts of personal information, even while we have little understanding of the current and potential uses of this trusted information by these companies. (It’s sometimes useful to think of them as companies rather than websites.)
Trust, we are told, is gained through an expectation of things like authority, reciprocity, and care. However, trust for cloud cognition may be offered on another basis – convenience. It is simply more convenient to trust than not to. How many users read End-User License Agreements (EULAs)? Remember the controversy over the EULA for the Google Chrome browser? We are giving up control and safety for convenience, because we are interested primarily in what works. We will trust the cloud so long as it does not violate our trust, or so we tell ourselves. We are frogs in the frying pan, dimly aware of the ongoing war to douse the flame before our trusting nature dooms us. We rush to build authenticating mechanisms for this unstoppable move to the cloud, even as malefactors rush to steal from us by exploiting our trust.
The analogy to international banking systems is irresistible. We trusted in financial wizards because it worked, and there was no reason not to – except for those who took the time to understand the nature of the underlying “securities.” We already trust much and offload some measure of our lives to the ever-increasing cloud. We write of ways to increase trust, while the real job is to ensure the cloud earns the trust we have already given it.
December 10, 2008
“The [U.S. national security] system fails to know what it knows, to make sense of information and trends in order to understand an increasingly complex global environment, to make effective and informed decisions, and to learn over time what works—and what does not work.”
In a post on the FAS Project on Government Secrecy blog, Steven Aftergood refers to the Project for National Security Reform (PNSR) – specifically the work conducted by my team, the Knowledge Management Working Group, in the area of classification reform. Mr. Aftergood raises some important points, and I will try to respond to them here.
It is important to make clear that I am not speaking on behalf of the Project, but instead clarifying and discussing the analysis my team has already completed. This is my personal blog, and not sponsored or sanctioned by the Project for National Security Reform.
I appreciate the opportunity to discuss our work, as we worked against a compressed timeline and the report would have benefited greatly from additional time and resources. My team’s sections on knowledge management probably need more explanation than most, and I hope to expand on the ideas we put in that paper soon. I am hopeful that through conversations such as these I can add detail – but also learn from all of you how to improve our thinking on this important topic.
From the Secrecy News blog:
“’Sharing information across organizational boundaries is difficult… [because] agency cultures discourage information sharing,’ the report states. But this is a restatement of the problem, not an explanation of it.”
If that were all we stated in our problem statement, Mr. Aftergood would have a more valid case in finding our work shallow. In addition to his reference regarding impediments to information sharing, however, we also discuss (pp. 331-362):
– Poor interoperability on the classified side
– The proliferation of the “sensitive but unclassified” designation
– Confusing technical connections with collaboration
– Information systems lacking common data abstraction, protocols, and compatible business logic
– The inability of systems to understand business limitations and the context of data
The recommendations we make in the report on this topic are likewise truncated in Mr. Aftergood’s treatment.
“And so the real upshot of the report’s argument is that the classification system cannot be fixed at all, at least not in isolation or on its own existing terms…
They vaguely advocate a “common [government-wide] approach for information classification [that] will increase transparency, improve accessibility, and reinforce the overall notion that personnel in the national security system are stewards of the nation’s information, not owners thereof.”
We didn’t intend to be vague, and apologize if the reader is left with the impression that the “teams” recommendation alone was sufficient to resolve classification issues. In fact, we recommend (p.450) the establishment of an Office for Decision Support within the NSC Executive Secretariat, which would include the functions within ODNI (Special Security Center) that are currently working to establish a common security classification across the national security system. We believe the work this office is already doing is valuable, and seek to give it budgetary and enforcement mechanisms to ensure it succeeds. From our recommendations:
“[T]he Special Security Center within the Office of the Director of National Intelligence currently works to establish uniformity and reciprocity across the intelligence community, but this approach should be expanded to include the entire national security system.”
Mr. Aftergood is correct that we believe a systemic approach to resolving the problems of the national security system is appropriate. Hence, while we recommend the above for classification issues, we recognize that without the reforms mentioned in the human capital, strategy, and resources sections – the ‘knowledge management’ problems will not be resolved.
For example, the fact that information security professionals are free to assert controls that hamper information sharing and other business functions remains a problem.
“There is often a tension between information security and operational effectiveness. The latter is enabled by easy access to information and the free flow of information both within and across organizational boundaries. The former often requires tight controls on information access and sharing based on a wide range of parameters (e.g., classification level, organizational affiliation, ‘need to know’ requirements, etc.) in order to minimize risks such as unauthorized access to data, data theft, and data manipulation. Historically, national security organizations have placed more emphasis on information security requirements than on the imperatives of information access and sharing. The result has been a culture of ‘risk avoidance’ that has limited the ability of key people and organizations to work collaboratively.”
I appreciate the discussion and review of our work, which we view as the beginning of a conversation. My thanks to Mr. Aftergood for engaging with us.
December 6, 2008
Chain of events: an acquaintance writes an email, referencing this blog post from APQC. I respond with a rant, augmented by a couple of acidic Twitter messages to release steam. These rants are posted to my Facebook status line, resulting in a brief conversation there with a FB friend – who initially believes I’ve lost my mind.
And now here. Why here? I’ve already responded to the acquaintance, and interacted with the FB friend, and overall made my point. Well, I’m blogging now to establish some measure of permanence to my thoughts. My apologies, then, to those two individuals who have already been subjected to my rant.
The APQC blog asked a very reasonable question: “What’s the Deal with Lessons Learned?” The author then posits several reasons:
“What is it about capturing and applying lessons learned that so often trips us up and causes us to never get past the “capture” step of the process? Is it that the mistake or error that prompts the lesson is so context-dependent that we think others couldn’t benefit from it and therefore we don’t capture it at all? Or could it be that whatever repository these lessons disappear into is so unorganized that retrieving them in order to apply them is a huge undertaking? Or is it simple communication–in other words, we simply don’t share our lessons learned proactively with those who might benefit from them? Or some combination of the above?”
My answer: E! None of the above.
My acquaintance works in the Pentagon alongside his command’s “lessons learned” people, and shared that they go into the field, watch exercises, and then let people know where they made repeated mistakes. He was asking the same question: why don’t these programs work as intended?
In organizations where the machinery is larger than the man, where we serve and tend to the machines, where human behavior and decisions are minor aspects of the overall production line – then things like “lessons learned” along with six sigma, Lean, etc., make some sense and have proven results. The trouble comes when we apply these mechanisms in organizations where the human predominates.
My response is below, slightly edited, but retaining all the snarkiness. I should add that I was responding in the context of military training and operations. In most organizations, my opinion is strongly against “lessons learned” programs.
Regarding lessons learned… Let’s think about this for a moment. The underlying presumption regarding “lessons learned” is that what worked before will work again – and that the context around the new situation will not differ enough to make the “lesson” insufficient to the new challenge. This is arrogant, demonstrably false, and dangerous.