Ships that Pass in the Night: Tacit Knowledge in Psychology and Sociology

Harry Collins and Arthur Reber
p. 135-154

Abstract

Reber and Collins are both established researchers, in psychology and sociology respectively. For both, the central object of interest is the analysis and investigation of tacit knowledge. Yet neither has read or cited the other’s work. We ask here how such closeness of interest can coexist with such mutual ignorance. Over several months we explored the differences between our world views, our approaches to the subject and the difficulties of interdisciplinarity. The article is a summary of that exchange, presented as a kind of case study in how science is practised. We conclude with a list of the general properties of the dialogue associated with this kind of “incommensurability” and record our distaste for tribalism in academic life.


Introduction

This piece is different from others in the volume. Arthur Reber (AR) and Harry Collins (HC) have each been working on tacit knowledge for most of their careers, taking leading roles in establishing the topic in psychology, sociology and philosophy. Reber published his first paper on the topic in 1967 [Reber 1967] and Collins in 1974 [Collins 1974]. But neither of them has ever cited the other or, until very recently, read the other’s work, despite the fact that the words “tacit” and “knowledge” are prominent in the titles of papers and even books that each has written [Collins 2010], [Reber 1993]. When Reber was asked to referee a contribution to this volume, the two of them fell into an email exchange. They discovered that poor scholarship was only part of what was keeping them apart. As the exchange stretched across five months and some 300 contributions, they found that the real problem is that they speak different academic languages. In fact they speak different languages even in respect of the central topic of this volume and of much of their academic lives—tacit knowledge. They began to feel that they had stumbled into what seems to be a living instance of paradigm incommensurability [Kuhn 1962], or something close to it. Thus, the following exchange (25 November, 2012):

AR: I think that part of the difficulty I’ve been having reading your writings (especially Tacit and Explicit Knowledge) is that you seem to struggle to say things that hit me with a “well, duh, of course...”. In one email you went on about saluting and in various papers you talk about parsing various kinds of tacit knowledge and different instantiations of the act of riding a bicycle and I keep trying to figure out what you’re trying to convince me of that I don’t already know.
HC: The origins [of the problem] could be that you don’t quite get that I am dealing with knowledge-stuff rather than individual learning.

But later (31 December, 2012) we still find:

AR: Figuring out how humans do things is what I do. And it is still a strain on my brain to comprehend that it is not what you do.
HC: Whereas, as I keep saying, I am interested in knowledge-stuff, not how humans learn.
AR: I know this when you say this. I understand the words. Then I say to Rhiannon [AR’s professional psychologist wife], “do you know what Harry wrote today?” [...] and I find that I cannot form the words to express this thought of yours because it doesn’t fit in my framework. In my world there is no such thing as “knowledge stuff” independent of the humans holding the knowledge. It feels like saying you’re interested in how pawns move without looking at the game of chess.

These differences and the way they play themselves out in each discipline seemed worth exploring—so the conversation continued.

The authors have never met. Perhaps things would have been easier if we had. There were occasions when the dialogue came close to ending because of frustrations and personal misunderstandings that arose from brittle and inflammatory email interchanges. With face-to-face discussion it is possible to transmit more of the tacit! But we believe there is enough here to shed some light on the problems of interdisciplinarity. The purpose of this paper is, then, to explore and, to some extent, explain how such a situation can arise and continue. The focus will, of course, be centred on the example of the analysis of tacit knowledge but, inter alia, the analysis may shed some light on academic misunderstandings in general—perhaps helping others who find themselves in a similar position—and it may even shed a little light on the relationship between psychology and sociology. Toward the end of the paper there is a section describing the “Seven causes of misunderstanding” (p. 150) that we were able to pull out of our experience. This list may prove useful to those who find themselves faced with similar problems.

1 Different starting points

Some of the divergence can be explained by the parties’ paradigmatic early experiments and observations in the field of tacit knowledge. Reber asked individuals to memorize sequences of letters that were, though they did not know it, made up using complex rules—an “artificial grammar” [Reber 1967]. Over time they became sensitive to these patterns and could differentiate novel well-formed sequences from those that violated the rules even though they were unaware of what they had learned or even that they had learned—it was tacit knowledge. He called the process implicit learning and he contrasted it with other approaches that treated learning as a self-conscious process of hypothesis and test.
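
For readers unfamiliar with the paradigm, here is a minimal sketch, in Python, of the kind of procedure involved. The grammar, letter set and string lengths are illustrative assumptions rather than Reber’s original materials: a small finite-state graph generates the “well-formed” strings shown to participants, and the same graph is used to decide whether a novel probe string obeys the rules.

```python
import random

# Toy finite-state "artificial grammar" (illustrative only, not Reber's original):
# each state maps to (letter, next_state) options; a walk from S0 to END
# spells out one well-formed string.
GRAMMAR = {
    "S0": [("T", "S1"), ("P", "S2")],
    "S1": [("S", "S1"), ("X", "S3")],
    "S2": [("T", "S2"), ("V", "S3")],
    "S3": [("X", "S2"), ("S", "END"), ("V", "END")],
}

def generate_string(max_len=10):
    """Produce one grammatical string by walking the grammar graph to END."""
    while True:  # retry if the walk has not reached END within max_len letters
        state, letters = "S0", []
        while state != "END" and len(letters) < max_len:
            letter, state = random.choice(GRAMMAR[state])
            letters.append(letter)
        if state == "END":
            return "".join(letters)

def is_grammatical(s):
    """Could some path through the grammar spell s and finish at END?"""
    states = {"S0"}
    for ch in s:
        states = {nxt for st in states
                  for (c, nxt) in GRAMMAR.get(st, []) if c == ch}
        if not states:
            return False
    return "END" in states

training_set = [generate_string() for _ in range(20)]  # strings participants memorize
novel_ok = generate_string()                           # novel but well-formed probe
novel_bad = "X" + novel_ok[1:]                         # no grammatical string starts with X
print(training_set[:5], is_grammatical(novel_ok), is_grammatical(novel_bad))
```

Participants never see the graph; they only memorize strings like those in training_set, yet they later classify novel probes above chance without being able to state the rules.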

Collins noted that scientists trying to learn from others how to build a new kind of laser—the TEA-laser—failed unless time was spent in a successful laser-builder’s company [Collins 1974]. Even the most detailed written specifications would not enable them to build successfully though they could follow the circuit diagram and use components from the same manufacturers. No one knew what was being transferred in these face-to-face interactions—it was tacit knowledge.


Reber’s approach was quintessentially psychological—the artificial lab-based phenomenon being stripped down to its basic essentials in an effort to control all the variables. Collins’s was quintessentially sociological—a messy, natural, social situation. Collins’s initial focus was less on the reasons for failure of transmission and more on overall outcomes and the fact that face-to-face interaction was vital—though he went into more detail in later studies. One can, perhaps, see why for Reber the very meaning of knowledge became associated with the actions of individuals and was generally explored independently of society whereas for Collins its very meaning was to be found in what unfolded in society.1 Thus, for Reber, tacit came to be a synonym for unconscious, implicit. For Collins it was semantically closer to unspoken or unexplicated.2 Making matters worse, each used explicit as an antonym, though for Reber this meant that the knowledge was made conscious by self-reflection while for Collins it meant that it could be explicated or explained.

While nowadays there is little reference in the psychological literature to “the acquisition of tacit knowledge”—the term introduced by Reber, implicit learning, having become the default term—the attention of psychologists nevertheless tends to be on the acquisition of tacit knowledge rather than on its substance. In sociology, particularly in Collins’s approach, the effort is to analyze the substance. So we have Reber exploring process while Collins is concerned with content.

2 Ontologies

The authors are informed by different ontologies. Consciousness is important to Reber and unimportant to Collins. Collins puts a strong emphasis on the social nature of knowledge which is largely ignored by Reber. Reber emphasizes continuity between humans and other species whereas Collins emphasizes difference. Reber believes that consciousness is a primitive property of living organisms. He associates knowledge with consciousness and this means that machines (at least current machines which are not made of living materials) do not have knowledge. Reber is strongly informed by evolutionary theory and sees steady development from one entity into another with consciousness always present but its power evolving.

AR: I’m interested in “human acquisition” of knowledge but it is an interest that is a subset of much else. Human knowledge lies on a continuum with that of other species. I argue that the implicit mode of acquisition is fundamentally the same as various learning processes seen across the phylogenetic scale.

The two authors discovered that their uses of the terms consciousness and reflectiveness were somewhat confounded. For example, both agreed that cats were conscious but this did no explanatory work for Collins whereas it was important to Reber. Reber also considered that cats might well be reflective from time to time. In his view, a cat always “has consciousness” like a person or a snake or any other living entity—though the form of consciousness differs dramatically from species to species. A cat that is awake is “being conscious”; one that is asleep is “unconscious”. A cat that is acting in automatic, pure bottom-up mode—what it does virtually all the time except perhaps some occasional flickering of reflection—is acting implicitly, without conscious modulation of its behaviour. Thus for Reber, self-conscious attention to activity was the same as being reflective. In implicit learning there was no reflection going on whereas self-conscious rule-based learning was reflective.

Collins thinks that, in the main, only humans are reflective. Reflectiveness is tied up with the use of language. He agrees that there is a fuzzy borderline between humans and animals occupied by creatures such as chimps and dolphins which may share the rudiments of a language, but Collins thinks that creatures on the borderline should be ignored if the argument about the nature of reflection is to be clear. Reber thinks the borderline is critical because he’s far from convinced that it is language per se that is key to reflection. Human language, in his view, emerged along with high-level cognitive functions. It is likely that these cognitive abilities underlay both language and the capacity for self-reflection, rather than one causing the other. The jury is out on this but Reber is not comfortable assigning such a vital role to language. He notes that communication is a fundamental feature of many species—this fits with his preference for continuity.


Collins, who is interested in differences, insists on using cats as the animal example so as to get away from the borderline—no one thinks cats are language users. But, as noted, Reber even allows cats to be part of his continuum: they don’t reflect much on their knowledge but they might do so every now and again and they certainly communicate. Collins is drawn in exactly the opposite direction and, to make the point, he now proposes the ‘tacit-to-explicit’ test. A species can only be said to be capable of reflection, according to Collins’s newly invented criterion, if we can imagine some of its members trying to make their tacit knowledge explicit so that they can store it (as they see it), in cave-paintings, in hieroglyphs, and in books, and so that they can broadcast it to others not in the vicinity, and so on.3 As can be seen, Collins’s thought style (see “Seven causes of misunderstanding” p. 150) leads him to draw things apart; Reber’s thought style leads him to draw things together. But in both worlds, it is only humans who endeavour to make their knowledge explicit.

Collins’s central concept is socialness, which is based on human language. Language is not the same as information exchange, so bees and ants are excluded (one can see also that they would fail the tacit-to-explicit test). He believes there is a clear division between entities that are social in this sense and those that are not. Compare Reber:

AR: Ever watch a cat? It moves, turns left, stops, licks its front leg, goes over to a toy, picks it up, tosses it, chases it [...] and so on and so on. [...] Now go back and think how you spent the morning... you got up, scratched, rubbed your tummy, brushed your teeth, made coffee/tea (whatever), walked down the hallway [all unreflectively]. [...] It seems to me that you and the cat (and me) are rather similar...

For Reber, of course, humans do additional things that involve reflection, like deciding to send an email, arguing a philosophical point and going shopping. In the view of both Reber and Collins, however, virtually every interesting thing that humans do involves a blend of the implicit and the explicit, the unconscious and the conscious, the automatic and the reflective. Though, as explained, for Reber, but not for Collins, even the cat might reflect occasionally. For both, the self-conscious reflective things that humans do are different from the things that the cat does most of the time. So Reber sees two kinds of things happening in this scenario: the cat and the human doing similar unreflective things and the human doing reflective things (but in another scenario the cat could be doing reflective things too).

Collins, however, sees the cat and the human as very different even when they are both doing things unreflectively. In particular, the cat cannot brush its teeth, make coffee/tea or even, in his view, walk down the ‘hallway’—a hallway connotes a great deal to a human while a cat is just walking along an elongated space. Humans can only do things like brush teeth, make caffeinated beverages and walk in halls, however unreflectively they do them, because of the existence of a range of corresponding institutions linked together by language. Nothing the cat does is like this. Cats’ activities are circumscribed by their evolutionary history; different groups of humans are, however, enormously different, the differences emerging from the reflective activities of other humans who are distant in time and space from what is going on now. From Collins’s perspective, all these humans are linked together by a network of common social activity and language. So while both cat and human may be doing things in an unreflective way, most things that humans do depend on a history of reflection by other humans of a kind that the cat has not shared and cannot share since it has neither language nor social life in the strong sense of social. Reber has no deep problems with this form of analysis, which he views as a viable, if different, way to approach learning, language, communication and social function.

2.1 The tacit

AR: [...] when you use the term “tacit” [...] to distinguish it from “that which can be explicated”, you are flying in the face of a half-century of usage in psychology. [...] When I introduced the term “implicit” (in my MA thesis), I specifically chose it as a contrast against the “explicit” hypothesis-testing process that was being championed by people like Jerry Bruner at Harvard. Bruner and colleagues were developing a theory of knowledge acquisition that was based on the assumption that people learned new stuff by testing explicit (i.e., consciously held) hypotheses about how the world about them functioned [Bruner, Goodnow, & Austin 1956]. If their guesses were confirmed, knowledge became fixed. If disconfirmed, they tried another one. It was an extension of the “hypothetico-deductive” approach to science. Bruner was trying to push it into personal epistemology—a position which you would almost certainly call “idiotic”. I did, but for different reasons. My view was that this approach wasn’t wrong in any fundamental way. It was merely grotesquely limited. Yes, in rare circumstances people behave this way but they are few and far between. For example, it made no sense when talking about how an infant learned language or a child became inculcated with the mores of the surrounding social world. And, as you would agree, it made no sense at all when looking at how science was actually done—despite the fact that this is where Bruner began. None of these things were learned consciously, not language, not socialization, not even most scientific knowledge—they were learned unconsciously. [...] In cognitive psychology, making knowledge “explicit” is an act of an individual who is discovering the nature and form of knowledge previously held implicitly.

Both parties agree that (some) knowledge can be tacit for one person and explicit for another (in Reber’s sense of tacit/implicit), and can be tacit for one person at one time and explicit for the same person at another time. Both agree that it is not uncommon for a single person, who has had some piece of knowledge rendered explicit for him or her, to switch between using that knowledge in an explicit/self-conscious way or an implicit/tacit/unself-conscious way.

Consider the example of gear-changing in a car. Usually a novice driver will initially be taught to change gears in an explicit way—something like “shift into 2nd when the revs reach 2000”, or “change when the sound of the engine reaches a high pitch”. After a while the rules are forgotten and gear-changing becomes automatic, implicit. On the familiar commute to work we can be thinking about all manner of things while changing gears without being aware that we are doing it. Experienced drivers change gears unconsciously while thinking about something else—but there is nothing to stop them executing the shifts in a more self-conscious manner. For Reber the task changes when the focus of attention changes, and the Reberian analysis concentrates on the different nature of the task under conscious control versus non-conscious control. For example, non-conscious execution is generally more efficient for normal road conditions whereas self-conscious attention might be better when the road is icy. Collins acknowledges all this but it is not a central feature of his analysis of knowledge.

For Collins, gear-changing is the same ‘mimeomorphic’ (and therefore explicable) action whether it is currently being executed by a human in a self-conscious way, by a human in an implicit manner, or by an automatic gearbox mimicking the action. What makes it the same kind of action irrespective of how it is carried out at any specific time is that the action can be reproduced without reference to social context. Collins and Kusch called this a mimeomorphic action because it can be reproduced (or mimicked) by merely reproducing the behaviour associated with the action (e.g., as with a salute) [Collins & Kusch 1998]. In contrast, a polimorphic action depends on sensitivity to social context because the same behaviour does not always reproduce the same action (e.g., a greeting which, to be authentic, has to be varied from time to time). The terms mimeomorphic and polimorphic refer to whether or not the externally visible ‘shape’ of the action is merely copied or must change from social instance to social instance. To Collins, the very fact that there can be such a thing as an automatic gearbox is a consequence of gear-changing being a mimeomorphic action rather than polimorphic. It means that, in so far as gear-changing can be imagined to be learned entirely tacitly (and one can imagine such a scenario), it would be a species of Relational or Somatic Tacit Knowledge, not Collective Tacit Knowledge. Reber appreciates this parsing of the domain of tacit knowledge but it does not play a significant role in his thinking.

2.2 The key distinction


The examples of gear-changing and the cat reflect Collins’s interest in “knowledge-stuff” and Reber’s in individual learning and execution of acts. Collins and Reber cut up the world in different ways. For Reber, the essential thing is that actions like gear-changing can be done self-consciously or unselfconsciously depending on circumstances; the topic is the different ways of executing gear-changing. For Collins all gear-changing, however it is actually done, is of the same kind; it is knowledge that can be explicated and (potentially) automated with foreseeable technology as opposed to knowledge that cannot be explicated and automated with foreseeable technology.4 For Reber, the cat and the human, when they are being unreflective, are exhibiting the same kind of knowledge—or at least using similar evolutionarily ancient systems for expressing the knowledge. For Collins, in most instances the knowledge is very different. In Reber’s work, gear-changing can take two forms, conscious and unconscious; in Collins’s work there is but one form—mimeomorphic. For Reber, unreflective cat and unreflective human exhibit one kind of knowledge; for Collins they exhibit two.

Finally, notice that the data from Reber’s experiments have been simulated by neural net models, suggesting that associationistic models can capture aspects of the tacit dimension of human knowledge. However, in Reber’s view a neural net is just a model of a system that likely exists in brains. Nets don’t have knowledge, just as chess-playing computers don’t know anything about chess. For Collins, in spite of Reber’s qualifications, the success of computers like neural nets in reproducing the effects shows the restricted range of the tasks that are modelled in the Reber experiments—they concentrate on the mimeomorphic aspects of language.
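
To make the simulation claim concrete, here is a deliberately crude sketch—not the simple recurrent networks actually used in the implicit-learning literature—of a tiny feed-forward network trained on letter-bigram counts. It assumes the toy grammar and generate_string function from the sketch in section 1, and uses strings beginning with X as ungrammatical foils. On this toy problem the network ends well above chance without ever being given the rules explicitly.

```python
import numpy as np
# Assumes GRAMMAR and generate_string() from the toy-grammar sketch in section 1.

LETTERS = "TPSXV"
BIGRAMS = [a + b for a in LETTERS for b in LETTERS]

def features(s):
    """Bigram-count vector: a crude associationistic encoding of a letter string."""
    v = np.zeros(len(BIGRAMS))
    for a, b in zip(s, s[1:]):
        v[BIGRAMS.index(a + b)] += 1
    return v

def make_dataset(n=200):
    """Pairs of grammatical strings and foils that start with a forbidden letter."""
    xs, ys = [], []
    for _ in range(n):
        s = generate_string()
        xs += [features(s), features("X" + s[1:])]
        ys += [1.0, 0.0]
    return np.array(xs), np.array(ys)

X, y = make_dataset()
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (X.shape[1], 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 0.1, 8); b2 = 0.0                          # output unit

for _ in range(2000):                                  # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))           # predicted P(grammatical)
    g = (p - y) / len(y)                               # d(cross-entropy)/d(logit)
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
    gh = np.outer(g, W2) * (1.0 - h ** 2)              # backprop through tanh
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(axis=0)

print("training accuracy:", ((p > 0.5) == y).mean())
```

Whether one reads such a result as knowledge, or merely as a model of a knowing organism, is exactly the point at issue in the next section.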

3 Knowledge and consciousness

Eventually they began to realize that they were using consciousness in different ways; in some of these the connotations overlapped with knowledge and broader issues of epistemology but, alas, in others they didn’t.

AR: As we move along from simple and primitive organisms to the complex and sophisticated there is a shift from being dominated by the implicit to subsuming the implicit under an increasingly important explicit system. This shift occurs for learning, for memory and for encoding emotional situations. They shift from being utterly implicit to being open to introspective scanning and available for conscious recollection. There is a continuity here and, as we move along the phylogenetic scale, various specialized forms of “knowing” and “acting” and “retrieving” emerge. They do so to fit the demands of particular ecological circumstances. As brains got bigger the role of top-down, modulating functions increased. In humans it reached its pinnacle.

In the Reber thought-style, knowledge is on a continuum understood to emerge out of evolutionary forces. Therefore, it is Reber’s view that machines cannot know things. They cannot have knowledge because, being made of non-biological materials and not belonging to the same evolutionary tree as humans, they have no consciousness, and the very notion of knowledge is simply not applicable.

Collins went back to Reber three or more times to ask how he justified the claim that machines could not have knowledge but cats could, even though both operated unconsciously at least some of the time. He was never convinced by Reber’s answers even though he believed he understood what Reber was saying. Here are examples of the interchange:

AR: “Consciousness” is a feature of particular kinds of organisms. It denotes a continuum of subjectivity. Its most complex (and intellectually seductive) instantiation occurs in humans. “Knowledge” is a body of facts and information that organisms have. Since these organisms have consciousness, their knowledge is linked with this phenomenal state—with the understanding that much of this knowledge is acquired and held tacitly.
You can study knowledge in a disembodied way, of course, just like you can study how a taste-bud responds to sugar but this won’t get at what you experience when you eat chocolate cake.
So I guess the issue here isn’t so much whether these things are ontologically distinguishable but what the goal of the exploration is. I don’t think you’ll learn much about “knowledge” if you dissociate it from the organism doing the “knowing”.
HC: In your world machines cannot have knowledge because they do not have consciousness?
AR: When we say a computer “knows” how to do arithmetic, the “knowing” here is, in my world, very different from when we say a person “knows” how to. If you wish to say that the computer has “machine knowledge” I guess that would be okay but it detracts from the epistemic character we typically assign to “knowledge”.
HC: But a cat can have knowledge because it is conscious?
AR: I think of it in a Tom Nagel’ish fashion: There is something it is like to be a cat. There is nothing it is like to be a computer.
HC: Even though a cat usually uses its knowledge unconsciously.
AR: Yes, but so do humans. Much (most?) of the time we’re on automatic pilot [...] I spent this morning just like a cat.

We have already seen that Collins disagrees with this last claim but he also just does not understand the role of consciousness. He does not understand the confidence with which subjective understanding is readily imputed to cats and readily denied to machines. This confidence seems to rest solely on the theory that only living material is conscious. So Reber’s world view is consistent but it seems to Collins that the position is not and cannot be established by reference to evidence. Reber, of course, disagrees, feeling that there is substantial empirical support for his position. It is a characteristic of incommensurability that, where one person sees evidence, another person does not. Here, from Collins’s viewpoint there is no evidence for Reber’s position, while from Reber’s viewpoint there is more evidence for his position than there is for Collins’s. Each party believes that the other is basing its argument on something less than adequate. Collins thinks Reber does not care about evidence, whereas Reber thinks what Collins counts as evidence is no more substantial than what he bases his own argument on. The interchange, then, can be said to be characterised by “mismatched explanatory adequacy” (see “Seven causes of misunderstanding” p. 150) even though each party believes the other is making a mistake in the way it views the opponent’s position.


This tension reiterates the central discontinuities at the heart of both approaches. For Reber, he really was acting like a cat when he got up because what constitutes Reber’s universe is how creatures attend to what they are doing and he and the cat attend (or not) in the same way much of the time; furthermore, they are both conscious. For Collins, the knowledge of the cat and the knowledge of the human are of completely different types and while the human can act like an animal (e.g., scratching) the cat can never act like even an unreflective human engaged in acts that depend on language. Furthermore, Collins does not understand the explanatory status of claims like a cat has knowledge because “there is something it is like to be a cat” whereas a computer does not because there is nothing it is like to be a computer.5 Reber notes that cats and people howl and jump if you stick them with a pin. A computer makes no such response; it merely loses a couple of bytes. In the former we have subjectivity and phenomenal experience, in the latter we have neither. Collins understands what Reber is saying but does not see it as evidence for consciousness being a correlate of knowledge.

Reber feels no need to define knowledge beyond what can be found in the dictionary but Collins has to define knowledge-stuff. He inclines toward what he thinks of as a Wittgensteinian meaning for knowledge, namely, that to understand the meaning of words one must understand their use. To know the meaning of a word, then, is to be able to use it as it is used in society. This approach fits, as it happens, with the philosophy of the Turing Test where intelligence is demonstrated by a performance that is indistinguishable from that of an entity that is known to be intelligent—in the Turing Test it is use that is the criterion.

Collins, then, thinks of knowledge as the stuff you have when you can do certain things. If TEA-laser builders hung around with successful laser builders something passed to them that was still in them on their journey home—and which they could then use to build a successful laser when they arrived. There was no reason to think that their knowledge was affected by what it felt like to be a laser-builder and the readily available criterion of having knowledge was being able to build a working laser. For Collins, it follows that we can imagine a machine building a laser and this makes machines candidates for the possession of knowledge and it means that work is required to show if and why they would be different from human laser-builders. There is nothing in the definition of machines or substance of machines that prevents them being such candidates. Reber, from his evolutionary stance, demurs. The “Nagelish” point isn’t that the notion that “there is something it is like to be a laser-builder” necessarily affects knowledge. It is a mental state whose causal roles need to be determined. Collins thinks he understands what Reber is saying about consciousness but does not see what it has to do with knowledge.

Collins argues, however, that existing and foreseeable machines cannot have full human-like knowledge because they do not share human social life. This argument is testable. It is, for example, why all current and foreseeable machines fail properly conducted Turing Tests. The failure is visible from the outside without reference to internal states.

Naturally, we had a long interchange about the Chinese Room which may illustrate “focus blindness” (see “Seven causes of misunderstanding” p. 150) or at least the difference between our projects. Searle’s Chinese Room, it will be recalled, shows that a performance equivalent to that of a conscious human does not prove consciousness [Searle 1980]. But this makes no difference to Collins as he is only interested in performance, not consciousness. If the Chinese Room worked in a linguistically perfect way as advertised, however, it would disprove Collins’s view that no foreseeable computer can act as a socially embedded being. But Collins believes the Chinese Room would not and could not work as Searle describes it:

HC: [...] because language is the property of the embedding society, is continually changing, and there is no mechanism linking the Chinese Room to the changing society. Over time it would start to perform archaically (just like a human isolated from society). If a mechanism is put in place to link it to society (e.g., the database is updated by humans) then the ‘socialness’ will be located in that mechanism, not the Chinese Room.
AR: I have difficulty with this argument. It seems to me pretty straightforward that the Room is not conscious and, as with any artificial entity, there is nothing it is like to be the Chinese Room. In so far as it needs to be social, the social aspects of language are embedded from the outset. Whatever information the Room has at the beginning of the test was put there by programmers with social knowledge. So why are they excluded from adding information over time? The humans it’s being compared with have input. And if, as you say, this added socialness is then not “in” the mechanism, why is it not “in” it at the very beginning?
HC: Yes, the social is in there when the Room is first set up but it is a frozen snapshot. If the social is updated by human attendants it is the attendants that are making the connection between the Room and society—that is where the social gets in—the link via the human attendants. That is the point! What we do not know how to do is to automate that process. That, in my view, is the key to making computers handle language like humans handle it.

We leave this exchange uncommented, as a classic instance of academic misunderstanding.

4 Conclusion on consciousness

There seemed to be only one place where we wanted to say to each other “you are wrong in a really serious way and I am right”. Reber thinks Collins will never understand knowledge if he does not take consciousness into account as a central feature. Of course, Collins thinks no one understands knowledge if they do not understand socialness but, more importantly, he thinks that Reber’s division of the world into conscious and non-conscious, and therefore knowledge-possessing and non-knowledge-possessing things is arbitrary and explanatorily inadequate. Plants are out, but every other thing made of living matter is in; everything not made of living matter is out. It seems to Collins that, here, constructing a consistent world view has taken priority over the desire to develop a theory with observable consequences.

Reber maintains that consciousness does have observable consequences. One can see the difference when humans move from implicit to self-conscious execution of a task. In fact, a variable often manipulated in experiments on implicit learning is awareness. It turns out to be a critical component in recruiting different kinds of cognitive processes and impacts on how individuals engage in remembering material, making decisions, forming preferences and making aesthetic judgements [Reber 1993], [Zizak & Reber 2004].

Collins finds that this reply does not bear on his question and sees this difference as being no more significant than using two different computer programs to execute the same task. The question for Collins is: Are there things that humans can do in virtue of their consciousness that machines cannot do in virtue of their lack of consciousness? Collins cannot think of anything that lack of consciousness in this sense rules out (whereas he can list many things that machines cannot do in virtue of their lack of socialness). Reber thinks that Collins’s question is not a good one, or at least not useful, and he fails to grasp how Collins deals with things like aesthetic judgements which require consciousness and cannot be made by machines. Collins responds that if machines had socialness they would be able to make aesthetic judgements—which are quintessentially social.

What divides the two parties here might be explained with an analogy using the old philosophical puzzle, “If a tree falls in the forest and there is no one around, does it make a sound?” One possible answer is both “yes” and “no”, for it depends on your initial stance. To a physicist the tree creates waves in the air called “sound waves” whether or not anyone hears them. To a psychologist, for there to be sound, there must be someone who has the subjective experience which we call “sound”.

In the same way one might ask: “If there were two machines having a human-like conversation in a room and there is no one about to join in, is there knowledge being exchanged?” Collins, from his “knowledge-stuff” perspective, answers “yes” because, presumably, the machines are learning to do new things from each other. Reber, from his subjectivist point of view, says “no”. Reber acknowledges that there is this stuff called “knowledge” in the room but claims it has nothing to do with humans knowing things any more than the physicist’s description of the sound of a tree falling tells us anything about what it sounds like.

5 Evolution

Reber argues that evolutionary theory is the key to understanding human knowledge. In Collins’s view, evolutionary theory is of very little help; the most remarkable characteristic of humans is how different from each other groups of humans have become since significant evolution ceased. This point of departure is, perhaps, typical of tensions between psychologists and sociologists. Interestingly, however, Collins’s position is not so far from Reber’s in one of its aspects as this extract from the interchange illustrates:

HC: In TEK I say that tacit knowledge is unexceptionable because that is what animals and other living things have done since they emerged from the slime and it is explicit knowledge that is extraordinary.
AR: I could have written this last sentence myself...

So far so good, but then things started to diverge. Reber believes that human knowledge evolved and that consciousness came with it. As Reber points out, this has some interesting consequences for our expectations of the relative role of implicit and conscious handling of knowledge: the early implicit elements of our abilities (shared with animals, of course) should be relatively robust and free from cultural variation.

This cuts right across what interests Collins. Almost the whole of Collins’s academic life has been devoted to showing that what seems quintessentially to be the product of conscious processes (much of what happens in science) is deeply tacit (as in the example of the TEA-laser). Furthermore, language, which comes very late on the evolutionary scene and is very much associated with consciousness, is, for Collins, largely tacit in use. Collins, then, finds Reber’s use of evolutionary theory and consciousness orthogonal to what he wants to say about the relationship between language and the tacit.

There is, however, something more positive to be said. In TEK Collins continually states that he does not understand the explicit; he does not understand how strings carry meaning. For example, he points out that the icon for a house is nothing like a house so it is very hard to say what it is that is upside down about the upside-down version. In the face of these difficulties Collins adopts the term affordance which he calls a “conceptual bandage” since he has no concepts that would allow him to explain why an icon or anything else has any affordance in the first place. TEK works by addressing an easier problem which is how some string that has insufficient affordance to take part in an act of communication can be enhanced (for example, by making it longer or more elaborate).


But there is a big gap in Collins’s theory. How can any string have any affordance in the first place? Were Collins ever to want to fill this gap he suspects he might well be forced to adopt something like Reber’s evolutionary approach. He would have to say that animals, as they emerged from the slime, evolved to extract certain basic shapes from the environment (such as the upright triangle representing the roof in the house icon). That would be a way, and the only way he can think of, to explain why we can see that the upside-down icon is upside down. Collins, trying for a moment to adopt Reber’s perspective, would predict that upright triangles, since they must have become recognisable as distinctive entities so early on in human evolution, should be recognised across all cultures. There must be, he would argue, a substrate of fundamental shapes, patterns and perhaps colours and textures that all humans recognise and upon which the variations in culture are built.6 Such influence as this underlying structure has is clearly “bottom-up”.

The trouble, for Collins, is that much of cultural variation is “top-down”. Humans are capable of seeing almost anything as anything else. The question is going to be how the top-down variation of actual and potential human culture is related to the bottom-up structural vocabulary and how bottom-up explanations can bear on top-down explanations. All that has been accomplished, then, is to see the point of evolutionary theory and acknowledge the gaps in Collins’s theory in a more self-conscious way. But this has to be better than mutual incomprehension and talking at cross-purposes.

6 Overall conclusion

The question we started with is how, for nearly half a century, two academics could work on the analysis and exploration of tacit knowledge in their own disciplines and find no deep need to refer to each other’s work. The dialogue and its analysis have shown, we believe, how it can be—they start with different methods, fit into different world views, have different explanatory goals and use different language to talk about them. In the main, these are just differences and, in another universe, either party would be happy to have accomplished what the other has done.

Nevertheless, one can see why the ships pass in the night. In TEK, Collins claims to have constructed a map in which the many approaches to tacit knowledge listed can be related. Staggeringly, Collins finds that he did not even include Reber’s work in his inventory of approaches to tacit knowledge and now he finds that it will not fit on the map, certainly not easily. The map was based on the ways that the different approaches dealt with Collins’s three kinds of tacit knowledge but Reber’s conscious/implicit analysis of the individual works with a different substance. Reber works with the forms of human attendance to tasks—that is the stuff of his world; Collins deals with the nature of the tasks irrespective of how they are attended to from moment to moment and that is what the map portrays.

There are other differences in the two approaches to the world that are not so marked as the consciousness business but where Collins and Reber had difficulty talking to each other. One is how language is acquired: Reber has argued that the mechanism of implicit learning allows infants to pick up the patterns of spoken signals and notes that there is a considerable literature supporting this notion [Reber 1993, 2011]. Collins believes we simply do not know how humans pick up the cues from social life that are required for fluency and, a fortiori, we have no idea how to build such a capacity into a computer. Reber thinks that the implied link between knowing a good deal about language learning and building such knowledge into a computer isn’t warranted.

Another difference turned on the rather unusual perspective on scientific truth of the sociology of scientific knowledge as it manifested itself in a discussion of parapsychology—which began as a sideline issue but quickly became the focus of an unusual kind of mismatch. Reber has a robust view of parapsychology—it is nonsense. Collins refuses to accept that as a useful professional attitude and spent a long time trying to convince Reber that the sociologist of scientific knowledge cannot begin by believing that one kind of scientific view is nonsense if he or she is to investigate the social forces that lead to it being widely seen as nonsense. Reber agrees that studying how scientific nonsense becomes recognized as nonsense is a useful endeavor but maintains that it is still nonsense. Collins thinks that the sociology of knowledge way of thinking takes years of practice before it becomes natural and thinks that the difficulty of this aspect of the exchange arose out of the impossibility of conveying the approach through a few emails. Some of the main disagreement may have been flavoured by Collins’s history in the sociology of scientific knowledge—an esoteric position.

Seven causes of misunderstanding

Toward the end of the exchange we were able to identify seven systematic causes of misunderstanding.

Mismatched thought styles: Reber generally sees continuities whereas Collins is drawn to bringing out sharp differences and classifications. Reber focuses on individuals, Collins on collectivities.


Semantic mismatch: The parties often use the same word but with different meanings without realizing it. It applies even to words at the very centre of the discussion—as we mentioned earlier, for Reber tacit is a synonym of “unconscious”, for Collins, “unspoken” or “unsaid”. Reber feels he is being loyal to Polanyi. Collins notes in TEK that he is deliberately going beyond what Polanyi intended. In some instances we were agreeing on the nature of tacit knowledge; in others disagreeing—and were often bewildered by the incoherencies that emerged. This kind of disconnect is dangerous because the terms are so familiar that it is hard to imagine they might mean something different to the other party.7

Mismatched explanatory adequacy: As Collins sees it, the parties justify claims in different ways. We saw this in a discussion of whether machines can be classified as potentially knowledgeable or non-knowledgeable. According to Collins, Reber works by building a consistent world view based on consciousness as inherently subjective and embedded in evolutionary theory. Collins claims there must be observable consequences and Reber does not have them. Reber thinks Collins is wrong and that he does have evidence for his position that is as strong or stronger than the evidence for Collins’s position. But the notion of mismatched explanatory adequacy remains useful even if, in this case, both parties are wrong about what the other is trying to do. There are many cases where parties disagree about what kind of grounding is needed to establish a scientific result.


Mismatched saliences: Negative mismatched saliences occur because to remedy an information deficit one needs an inventory of what is in the other party’s head so one can see what is missing: such inventories do not exist. Positive mismatched saliences affected the current exchange because one party continually explained at length what the other party already knew. For example, Reber explained the psychological equivalent of Dreyfus’s five-stage theory of expertise several times because Collins’s ignorance of it seemed, to Reber, the only way to make sense of aspects of Collins’s position. Positive mismatched saliences are very frustrating as they stop debates moving forward but they would probably be less marked in face-to-face conversation.8

Focus blindness: It is sometimes impossible to see a contribution that lies in the peripheral field of a strongly focussed gaze. On occasions one of us thought he had asked a certain question but the other did not see the question because it was outside his view of the scope of their project. The exchange would continue on the assumption that the other had seen and appreciated the contribution. Confusion followed.


Reversion: Often one of us would explain an effect X to the other whose response made it evident that he understood. But the understanding was temporary. The problem was that X was held together in the longer term by a semantic net which included W, Y, Z, etc., and the whole structure only maintained its integrity through continual use. These W’s, X’s, Y’s and Z’s are like the spinning plates in a juggler’s act—if they are not kept spinning they fall. The dialogue, continued by one of the parties as though X were still in play, would then revert to the earlier state of mutual incomprehension.9

Misplaced engagement: Often, to explore and explain two cross-cutting views of the world, one needs to be distanced from them. But because we were engaged in the worlds we were trying to explore, it was almost impossible for us not to slip, every now and again, into trying to convince the other that they were wrong—the traces are still in the text. Where possible the argument should come only after the mutual exploration.

Coda

The thing about incommensurables, at least in Kuhn’s classic view, is that they are, well, incommensurable. Perhaps not surprisingly, at the outset we disagreed on this issue as well. Collins, having explored it at length [Collins & Pinch 1982], [Collins, Evans, & Gorman 2007], felt that situations like this occurred often. Reber, armed only with a snifter of cognac, felt that dedicated scientists should be able to bridge whatever conceptual gaps divided them through careful reading, dispassionate reflection and intellectual empathy—accompanied by tolerance and trust. He’s not so sure now. We found tolerance and trust and we have tried our hardest to be clear, but there are still aspects of the way the other thinks that seem strange. We’ve ended up in a unique place: we have agreed to disagree even though, in more than a few cases, we aren’t completely sure what we’re disagreeing about or why. But we have learned something that we think is important. We have both been focused on the same end point throughout our intellectual lives: understanding the human mind, the cultures we have constructed around us and the manifold ways in which we function as individuals and as collectives. We understand there is no one best way to get there. In fact, we both suspect that it is from the conflicts, the misunderstandings, the incommensurables that we have a decent chance at real progress—so long as the parties are willing to respect and acknowledge the legitimacy of the other. We are now even more disdainful of gangs of academics whose identity and self-esteem are tied up with cleaving to one intellectual position and scorning those of others. We have come to see that all intellectual positions are likely to have something valuable to offer even if you cannot fully understand them and that the way forward is often a mixture. We also realize that we both understand a lot more than we did before we began this exchange.


Bibliography

Biederman, Irving
1987 Recognition-by-components: A theory of human image understanding, Psychological Review, 94, 115–147.

Bruner, Jerome S., Goodnow, Jacqueline & Austin, George
1956 A Study of Thinking, New York: Wiley.

Collins, Harry
1974 The TEA set: Tacit knowledge and scientific networks, Science Studies, 4, 165–186.
2010 Tacit and Explicit Knowledge, Chicago: University of Chicago Press, [TEK].

Collins, Harry, Evans, Robert & Gorman, Mike
2007 Trading zones and interactional expertise, Studies in History and Philosophy of Science, 4, 657–666, special issue, edited by Collins, H.

Collins, Harry & Kusch, Martin
1998 The Shape of Actions: What Humans and Machines Can Do, Cambridge, MA: MIT Press.

Collins, Harry & Pinch, Trevor
1982 Frames of Meaning: The Social Construction of Extraordinary Science, Henley-on-Thames: Routledge & Kegan Paul.

Kuhn, Thomas S.
1962 The Structure of Scientific Revolutions, Chicago: University of Chicago Press.

Nagel, Thomas
1974 What is it like to be a bat?, The Philosophical Review, 83, 435–450.

Reber, Arthur S.
1967 Implicit learning of artificial grammars, Journal of Verbal Learning and Verbal Behavior, 6, 855–863.
1993 Implicit Learning and Tacit Knowledge: An Essay on the Cognitive Unconscious, New York: Oxford University Press.
2011 An epitaph for grammar, in Implicit and Explicit Language Learning, edited by Sanz, C. & Loew, R. P., Washington: Georgetown University Press, 23–34.

Searle, John
1980 Minds, brains, and programs, Behavioral and Brain Sciences, 3, 417–424.

Zizak, Diane M. & Reber, Arthur S.
2004 The structural mere exposure effect: The dual role of familiarity, Consciousness and Cognition, 3, 336–362.


Notes

1 The difference between the controlled experimental set up and the untidy and relatively uncontrolled social observation often gives rise to friction between psychologists and sociologists. In this case, however, both contributors were open-minded enough to admire the work of the other and the experimental/natural divide did not play a significant part in the debate.

2 AR: As derived from taciturn.

3 In TEK this criterion is what marks out “knowers” from non-knowers (knowers are not the same as entities that have knowledge—they must also be able to reflect on it). Initially, in this paper, Collins proposed the “vegetarian test”—to qualify to be a reflective species, sub-groups within it must be capable of self-consciously choosing to change their diet. The tacit-to-explicit test serves the same purpose but fits better with the topic of tacit knowledge and the notion of “knower” discussed in TEK.

4 Another influence on Collins’s approach to the debate is his later critique of artificial intelligence [Collins 2010], [Collins & Kusch 1998].

5 Collins found Nagel’s article on being a bat disappointing [Nagel 1974]. Reber liked it.

6 In one exchange, Reber noted that there is research in perceptual and cognitive psychology that coordinates with Collins’s speculations. Irv Biederman developed a sophisticated model of perception based on 2D and 3D primitive forms like circles, cones and ellipses called “geons” that form the components of complex objects [Biederman 1987]. Real and stylized houses, such as in Collins’s figures, are easily abstracted as composed of just such forms. Biederman’s theory uses a surprisingly small number of geons (less than 40) to account for virtually all relatively fixed objects.

7 Collins recalls another instance when philosopher Martin Kusch and he spent months talking across each other while writing The Shape of Actions. The reason, as they eventually realised, was that they were using the central term “action” in different ways. To Kusch an action could be something accidental so long as it had consequences—the sort of thing that law courts are interested in because it is necessary to explore it in order to assign culpability. To Collins an action had to represent a society—it had to be something like taking out a mortgage or divining a witch—an accidental happening could not be an action however consequential it turned out to be.

8 The term “mismatched saliences” (used for the negative version) is drawn from Collins’s discussion of tacit knowledge (e.g., [TEK, 96]). Regarding positive mismatched saliences, in face-to-face conversation a quick technical comment can reveal that one of the parties already understands a set of issues so it is appropriate to move on. Collins uses the technique in interviews with scientists.

9 Locke said that you do not own something unless you mix your labour with it, and this applies to concepts too. That is why good educational systems teach concepts via essays and seminars and why socialization is such an important part of education and essential to genuine interdisciplinary work. Collins once took a leading part in a seminar for natural science faculty that went smoothly for two years, the only problem being that the participants tended to ask the same questions at the end of the two years that they had asked at the beginning. Day-to-day everyone understood everything that was being argued, but it slipped away because they were not mixing their labour with it.


How to cite this article

Print reference

Harry Collins and Arthur Reber, “Ships that Pass in the Night: Tacit Knowledge in Psychology and Sociology”, Philosophia Scientiæ, 17-3 | 2013, 135-154.

Electronic reference

Harry Collins and Arthur Reber, “Ships that Pass in the Night: Tacit Knowledge in Psychology and Sociology”, Philosophia Scientiæ [Online], 17-3 | 2013, online since 01 October 2016. URL: http://journals.openedition.org/philosophiascientiae/893; DOI: https://doi.org/10.4000/philosophiascientiae.893


Authors

Harry Collins

School of Social Sciences, Cardiff University, Wales (UK)

Arthur Reber

Brooklyn College and the Graduate Centre of the City University of New York (USA)
University of British Columbia, Vancouver (Canada)


Copyright

The text and other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
