Sunday, December 13, 2009


Some philosophers object to intention-based semantics only because they think it precludes a dependence of thought on the communicative use of language. This is a mistake. Even if intention-based semantic definitions are given a strong reductionist reading, as saying that public-language semantic properties (i.e., those semantic properties that supervene on use in communicative behaviour) just are psychological properties, it might still be that one could not have propositional attitudes unless one had mastery of a public language. The idea of supervenience is usually thought to have originated in moral theory, in the works of such philosophers as G.E. Moore and R.M. Hare. Hare, for example, claimed that ethical predicates are 'supervenient predicates' in the sense that no two things (persons, acts, states of affairs) could be exactly alike in all descriptive or naturalistic respects but unlike in that some ethical predicate ('good', 'right', etc.) truly applies to one but not to the other. That is, there could be no difference in a moral respect without a difference in some descriptive, or non-moral, respect. Following Moore and Hare, from whom he avowedly borrowed the idea of supervenience, Davidson went on to assert that supervenience in this sense is consistent with the irreducibility of supervenient properties to their 'subvenient', or 'base', properties: 'Dependence or supervenience of this kind does not entail reducibility through law or definition . . .'. Thus, three ideas have come to be closely associated with supervenience: (1) 'property covariation' (if two things are indiscernible in base properties, they must be indiscernible in supervenient properties); (2) 'dependence' (supervenient properties are dependent on, or determined by, their subvenient bases); and (3) 'non-reducibility' (the property covariation and dependence involved in supervenience need not make the supervenient properties reducible to their base properties).
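The covariation idea admits a compact formal statement. The following is a sketch only; the quantified rendering, and the symbols B and S for the families of base and supervenient properties, are not in the original text:

```latex
% Property covariation: indiscernibility with respect to every base
% property P in B entails indiscernibility with respect to every
% supervenient property Q in S.
\forall x \, \forall y \left[ \forall P \in B \, (Px \leftrightarrow Py)
  \;\rightarrow\; \forall Q \in S \, (Qx \leftrightarrow Qy) \right]
```

Read contrapositively, this is Hare's formulation: no difference in a supervenient respect without some difference in a base respect.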
Whether or not it is plausible that having propositional attitudes requires mastery of a public language (that is a separate question), it would be no more logically puzzling than the idea that one could not have propositional attitudes unless one had ones with certain sorts of content. Tyler Burge's insight, that the contents of one's thoughts are partially determined by the meanings of one's words in one's linguistic community, is perfectly consistent with an intention-based semantic reduction of the semantic to the psychological. Nevertheless, there is reason to be sceptical of the intention-based semantic programme.


So the most reasonable view about the actual-language relation is that it requires language users to have certain propositional attitudes, but that there is no prospect of defining the relation wholly in terms of non-semantic propositional attitudes. It is further plausible that any account of the actual-language relation must appeal to speech acts such as speaker meaning, where the correct account of these speech acts is irreducibly semantic (they will fail to supervene on the non-semantic propositional attitudes of speakers in the way that intentions fail to supervene on an agent's beliefs and desires). Is it possible to define the actual-language relation, and if so, will any irreducibly semantic notions enter into that definition other than the sorts of speech act notions already alluded to? These questions have not been much discussed in the literature; there is neither an established answer nor competing schools of thought. It may be held, however, that the actual-language relation is one of the few things in philosophy that can be defined, and that speech act notions are the only irreducibly semantic notions the definition must appeal to (Schiffer, 1993).

A stronger dependence of thought on language is claimed by the view that propositional attitudes are relations to linguistic items, relations which obtain, at least in part, by virtue of the content those items have among language users. This position does not imply that believers have to be language users, but it does make language an essential ingredient in the concept of belief. The position is motivated by two considerations: (a) the supposition that believing is a relation to things believed, which things have truth values and stand in logical relations to one another, and (b) the desire not to take things believed to be propositions, that is, abstract, mind- and language-independent objects that have essentially the truth conditions they have. The first consideration is well motivated: the relational construal of propositional attitudes is probably the best way to account for the quantification in 'Harvey believes something irregular about you'. But there are problems with taking linguistic items, rather than propositions, as the objects of belief. If 'Harvey believes that irregularities are abnormal' is represented as a relation between Harvey and the sentence 'irregularities are abnormal', then one could know the truth expressed by the sentence about Harvey without knowing the content of his belief: one could know that he stands in the belief relation to 'irregularities are abnormal' without knowing its content. This is unacceptable, for if Harvey believes that irregularities are abnormal, then what he believes, the reference of 'that irregularities are abnormal', is that irregularities are abnormal. But what is this thing, that irregularities are abnormal?
Well, it is abstract, in that it has no spatial location; it is mind- and language-independent, in that it exists in possible worlds containing neither thinkers nor speakers; and, necessarily, it is true if and only if irregularities are abnormal. In short, it is a proposition: an abstract, mind- and language-independent thing that has a truth condition and has essentially the truth condition it has.

A more plausible way that thought depends on language is suggested by the topical thesis that we think in a 'language of thought'. Perhaps this is nothing more than the vague idea that the neural states that realize our thoughts 'have elements and structure in a way that is analogous to the way in which sentences have elements and structure'. But we can get a more literal rendering by relating it to the abstract conception of language already recommended. On this conception, a language is a function from 'expressions' (sequences of marks or sounds or neural states or whatever) onto meanings, which meanings will include the propositions our propositional-attitude relations relate us to. We could then read the language-of-thought hypothesis as the claim that thinking consists in standing in a certain relation to a language whose expressions are neural states. There would now be more than one 'actual-language relation'. One might be called the 'public-language relation', since standing in it makes a language the instrument of communication of a population of speakers. Another might be called the 'language-of-thought relation', because standing in that relation to a language makes it one's lingua mentis. Since the abstract notion of a language has been so weakly construed, it is hard to see how the minimal language-of-thought proposal just sketched could fail to be true. At the same time, it has been given no interesting work to do. In trying to give it more interesting work, further dependencies of thought on language might come into play. For example, it has been claimed that the language of thought of a public-language user just is the public language she uses: her neural sentences are something like her spoken sentences.
For another example, it might be claimed that even if one's language of thought is distinct from one's public language, the language-of-thought relation presupposes the public-language relation in ways that make the content of one's thoughts dependent on the meanings of one's words in one's public-language community.
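The abstract conception just described, on which a language is simply a function from expressions to meanings, can be given a bare-bones illustration. The toy model below (all names and example sentences are hypothetical, purely illustrative, and not drawn from the source) shows how the same abstract machinery yields both a 'public-language relation' and a 'language-of-thought relation', depending only on what counts as an expression:

```python
# Toy model: a "language" is a function (here, a dict) from expressions
# to meanings. Expressions may be strings of sounds or marks (public
# language) or labels standing in for neural states (language of
# thought). All names below are hypothetical illustrations.

public_language = {
    "snow is white": "the proposition that snow is white",
    "grass is green": "the proposition that grass is green",
}

language_of_thought = {
    "neural-state-17": "the proposition that snow is white",
    "neural-state-42": "the proposition that grass is green",
}

def expresses(language, expression):
    """Return the meaning an expression has in the given language,
    or None if the language assigns it no meaning."""
    return language.get(expression)

# The same abstract relation covers both cases: two different
# "languages" can map quite different expressions onto one meaning.
assert expresses(public_language, "snow is white") == \
       expresses(language_of_thought, "neural-state-17")
```

The point of the sketch is only that, so weakly construed, nothing in the notion of a language prefers marks and sounds over neural states as its expressions.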

All of this suggests a specific 'mental organ', to use Chomsky's phrase, that has evolved in the human cognitive system specifically in order to make language possible. The specific structure of this organ simultaneously constrains the range of possible human languages and guides the child's learning of the target language, later making rapid on-line language processing possible. The principles represented in this organ constitute the innate linguistic knowledge of the human being. Additional evidence for the early operation of such an innate language acquisition module is derived from the many infant studies showing that infants selectively attend to sound-streams that are prosodically appropriate, that have pauses at clausal boundaries, and that contain linguistically permissible phonological sequences.

A particularly strong form of the innateness hypothesis in the psycholinguistic domain is Fodor's (1975, 1987) 'language of thought' hypothesis. Fodor argues not only that the language learning and processing faculty is innate, but that the human representational system exploits an innate language of thought which has all of the expressive power of any learnable human language. Hence, he argues, all concepts are in fact innate, in virtue of the representational power of the language of thought. This remarkable doctrine is even stronger than the classical rationalist doctrine of innate ideas: whereas Chomsky echoes Descartes in arguing that the most general concepts required for language learning are innate, while allowing that more specific concepts are acquired, Fodor echoes Plato in arguing that every concept we ever 'learn' is in fact innate.

Fodor defends this view by arguing that the process of language learning is a process of hypothesis formation and testing, where among the hypotheses that must be formulated are meaning postulates for each term in the language being acquired. But in order to formulate and test a hypothesis of the form ''χ' means 'y'', where 'χ' denotes a term in the target language, prior to the acquisition of that language, the language learner, Fodor argues, must have the resources necessary to express 'y'. Therefore, there must be available, in the language of thought, a predicate co-extensive with each predicate in any language that a human can learn. Fodor also argues for the language of thought thesis by noting that the language in which human information processing takes place cannot be a human spoken language, since that would, contrary to fact, privilege one of the world's languages as the most easily acquired. Moreover, it cannot be, he argues, that each of us thinks in our own native language, since that would (a) predict that we could not think prior to acquiring a language, contrary to the original argument, and (b) mean that psychology would be radically different for speakers of different languages. Hence, Fodor argues, there must be a non-conventional language of thought, and the fact that the mind is innately 'wired' for mastery of its predicates, together with its expressive completeness, entails that all concepts are innate.

The dispute about whether there are innate ideas is much older than is sometimes imagined. Plato, in the 'Meno' (the learning paradox), famously argues that all of our knowledge is innate. Descartes (1596-1650) and Leibniz (1646-1716) defended the view that the mind contains innate ideas; Berkeley (1685-1753), Hume (1711-76) and Locke (1632-1704) attacked it. Indeed, as we now conceive the great debate between European rationalism and British empiricism in the seventeenth and eighteenth centuries, the doctrine of innate ideas is a central point of contention: rationalists typically claim that knowledge is impossible without a significant stock of general innate 'concepts' or judgements; empiricists argued that all ideas are acquired from experience. This debate is replayed, with more empirical content and with considerably greater conceptual complexity, in contemporary cognitive science, most particularly within the domains of psycholinguistic theory and cognitive developmental theory. Although Chomsky is recognized as one of the main forces in the overthrow of behaviourism and in the initiation of the 'cognitive era', the relation between psycholinguistics and cognitive psychology has always been an uneasy one. The term 'psycholinguistics' is often taken to refer primarily to psychological work on language that is influenced by ideas from linguistic theory; mainstream cognitive psychologists, for example when they write textbooks, often prefer the term 'psychology of language'. The difference is not, however, merely one of name. It is worth noting that both Fodor and Chomsky, who argue that all concepts, or all of linguistic knowledge, are innate, represent one side of this debate, against empiricists who argue that there is no need to appeal to innateness in explaining the acquisition of language or the facts of cognitive development.
But this debate would be a silly and sterile one for obvious reasons: something is innate. Brains are innate, and the structure of the brain must constrain the nature of cognitive and linguistic development to some degree. Equally obviously, something is learned, and is learned as opposed to merely grown, as limbs or hair grow. For not all of the world's citizens end up speaking English, or knowing the Special Theory of Relativity. The interesting questions then all concern exactly what is innate, to what degree it counts as knowledge, what is learned, and to what degree its content and structure are determined by innately specified cognitive structures. And that is plenty to debate about.

Innatists argue that the very presence of linguistic universals argues for the innateness of linguistic knowledge, but more important and more compelling is the fact that these universals are, from the standpoint of communicative efficiency, or from the standpoint of any plausible simplicity criterion, adventitious. There are many conceivable grammars, and those determined by universal grammar are not ipso facto the most efficient or the simplest. Nonetheless, all human languages satisfy the constraints of universal grammar. Since neither the communicative environment nor the communicative task can explain this phenomenon, it is reasonable to suppose that it is explained by the structure of the mind, and, therefore, by the fact that the principles of universal grammar lie innate in the mind and constrain the languages that a human can acquire.

Linguistic empiricists answer that there are alternative possible explanations of the existence of such adventitious universal properties of human languages. For one thing, such universals could be explained, Putnam (1975, 1992) argues, by appeal to a common ancestral language, and the inheritance of features of that language by its descendants. Or it might turn out that, despite the lack of direct evidence at present, the features of universal grammar in fact do serve either the goals of communicative efficacy or simplicity according to a psychologically important metric. Finally, empiricists point out, the very existence of universal grammar might be a trivial logical artefact (Quine, 1968): for one thing, any finite set of structures will have some features in common. Since there are a finite number of languages, it follows trivially that there are features they all share. Moreover, it is argued, many features of universal grammar are interdependent, so in fact the set of functional principles shared by the world's languages may be rather small. Hence, even if these are innately determined, the amount of innate knowledge thereby required may be quite small as compared with the total corpus of general linguistic knowledge acquired by the first language learner.

These replies are rendered less plausible, innatists argue, when one considers the fact that the errors language learners make in acquiring their first language seem to be driven far more by abstract features of grammar than by any available input data. So, despite receiving correct examples of irregular plurals or past tense forms for verbs, and despite having correctly formed the irregular forms for those words, children will often incorrectly regularize irregular verbs once they acquire mastery of the rule governing regulars in their language. And in general, not only are the correct inductions of linguistic rules by young language learners made in the absence of confirmatory data and in the presence of refuting data, but, more importantly, children's erroneous inductions are always consistent with universal grammar, often simply representing the incorrect setting of a parameter in the grammar. More generally, innatists argue that all grammatical rules that have ever been observed satisfy the structure-dependence constraint. That is, many linguists and psycholinguists argue that all known grammatical rules of all the world's languages, including the fragmentary languages of young children, must be stated as rules governing hierarchical sentence structures, and not, say, as rules governing sequences of words. Many of these, such as the constituent-command constraint governing anaphora, are highly abstract indeed, and appear to be respected by even very young children (Solan, 1983; Crain, 1991). Such constraints may, innatists argue, be necessary conditions of learning natural language in the absence of specific instruction, modelling and correction, the conditions in which all first language learners in fact acquire their native languages.

An important empiricist answer to these observations derives from recent studies of 'connectionist' models of first language acquisition (Rumelhart & McClelland, 1986, 1987). Connectionist systems not previously trained to represent any subset of universal grammar, when made to induce a grammar that includes a large set of regular forms and a few irregulars, also tend to over-regularize, exhibiting the same U-shaped learning curve seen in human language acquirers. It is also noteworthy that connectionist learning systems that induce grammatical systems 'accidentally' acquire rules on which they are not explicitly trained, but which are consistent with those upon which they are trained, suggesting that as children acquire portions of their grammar, they may accidentally 'learn' other consistent rules, which may be correct in other human languages, but which must then be 'unlearned' in their home language. Yet such 'empiricist' language acquisition systems have yet to demonstrate their ability to induce a sufficiently wide range of the rules hypothesized to be comprised by universal grammar to constitute a definitive empirical argument for the possibility of natural language acquisition in the absence of a powerful set of innate constraints.

The poverty of the stimulus argument has been of enormous influence in innateness debates, though its soundness is hotly contested. Chomsky notes that (1) the examples of the target language to which the language learner is exposed are always jointly compatible with an infinite number of alternative grammars, and so vastly underdetermine the grammar of the language; (2) the corpus always contains many examples of ungrammatical sentences, which should in fact serve as falsifiers of any empirically induced correct grammar of the language; and (3) there is, in general, no explicit reinforcement of correct utterances or correction of incorrect utterances, either by the learner or by those in the immediate training environment. Therefore, he argues, since it is impossible to explain the learning of the correct grammar (a task accomplished by all normal children within a very few years) on the basis of any available data or known learning algorithms, it must be that the grammar is innately specified, and is merely 'triggered' by relevant environmental cues.

Opponents of the linguistic innateness hypothesis, however, point out that the circumstance Chomsky notes in this argument is hardly specific to language. As is well known from arguments due to Hume (1978), Wittgenstein (1953), Goodman (1972) and Kripke (1982), in all cases of empirical abduction, and of training in the use of a word, data underdetermine theories. This moral is emphasized by Quine (1954, 1960) as the principle of the underdetermination of theory by data. But we nonetheless do abduce adequate theories in science, and we do learn the meanings of words. And it would be bizarre to suggest that all correct scientific theories or the facts of lexical semantics are innate.

But, innatists reply, when the empiricist relies on the underdetermination of theory by data as a counterexample, a significant disanalogy with language acquisition is ignored: the abduction of scientific theories is a difficult, laborious process, taking a sophisticated theorist a great deal of time and deliberate effort. First language acquisition, by contrast, is accomplished effortlessly and very quickly by a small child. The enormous relative ease with which such a complex and abstract domain is mastered by such a naïve 'theorist' is evidence for the innateness of the knowledge achieved.

Empiricists such as Putnam (1926- ) have rejoined that innatists underestimate the amount of time that language learning actually takes, focusing only on the number of years from the apparent onset of acquisition to the achievement of relative mastery over the grammar. Instead of noting how short this interval is, they argue, one should count the total number of hours spent listening to language and speaking during this time. That number is in fact quite large, and is comparable to the number of hours of study and practice required in the acquisition of skills that are not argued to derive from innate structures, such as chess playing or musical composition. Hence, they argue, once the correct temporal parameters are taken into consideration, language learning looks more like one more case of human skill acquisition than like a special unfolding of innate knowledge.

Innatists, however, note that while the ease with which most such skills are acquired depends on general intelligence, language is learned with roughly equal speed, and to roughly the same level of general syntactic mastery, regardless of general intelligence. In fact, even significantly retarded individuals, assuming no specific language deficit, acquire their native language on a time-scale and to a degree comparable to that of normally intelligent children. The language acquisition faculty hence appears to allow access to a sophisticated body of knowledge independent of the sophistication of the general knowledge of the language learner. That is, language learning and utilization mechanisms appear to be insulated from cognition outside of language processing. They are informationally encapsulated: only linguistic information is relevant to language acquisition and processing. They are mandatory: language learning and language processing are automatic. Moreover, language is subserved by specific dedicated neural structures, damage to which predictably and systematically impairs linguistic functioning, and not general cognitive functioning.

Again, the issues at stake in the debate concerning the innateness of such general concepts pertaining to the physical world cannot be so stark as a dispute between a view on which nothing is innate and one according to which all empirical knowledge is innate. Rather, the important, and again always empirical, questions concern just what is innate, just what is acquired, and how innate equipment interacts with the world to produce experience. As Kant put it: 'There can be no doubt that all our knowledge begins with experience . . . But though all our knowledge begins with experience, it does not follow that it all arises out of experience'.

Philosophically, the unconscious mind postulated by psychoanalysis is controversial, since it requires thinking in terms of a partitioned mind and applying a mental vocabulary (intentions, desires, repression) to a part to which we have no conscious access. The problem is whether this merely uses a harmless spatial metaphor of the mind, or whether it involves a philosophical misunderstanding of mental ascription. Other philosophical reservations about psychoanalysis concern the apparently arbitrary and unfalsifiable nature of the interpretative schemes employed. The method of psychoanalysis, or psychoanalytic therapy for psychological disorders, was pioneered by Sigmund Freud (1856-1939). The method relies upon an interpretation of what a patient says while 'freely associating', or reporting what comes to mind in connection with topics suggested by the analyst. The interpretation proceeds according to the scheme favoured by the analyst, and reveals ideas dominating the unconscious but previously inadmissible to the conscious mind of the subject. When these are confronted, improvement can be expected. The widespread practice of psychoanalysis is not matched by established data on such rates of improvement.

Nonetheless, the task of analysing psychoanalytic explanation is complicated initially in several ways. One concerns the relation of theory to practice. There are various perspectives on the relation of psychoanalysis, the therapeutic practice, to the theoretical apparatus built around it, and these lead to different views of psychoanalysis' claim to cognitive status. The second concerns psychoanalysis' legitimation. The way that psychoanalytic explanation is understood has immediate implications for one's view of its truth or acceptability, and this is of course a notoriously controversial matter. The third is exegetical. Any philosophical account of psychoanalysis must of course start with Freud himself, but it will inevitably privilege some strands of his thought at the expense of others, and in so doing favour particular post-Freudian developments over others.

Freud clearly regarded psychoanalysis as engaged principally in the task of explanation, and held fast to his claims for its truth through alterations in his view of the efficacy of psychoanalytic therapy. Some of psychoanalysis' advocates have, under pressure, retreated to the view that psychoanalytic theory has merely instrumental value, as facilitating psychoanalytic therapy. But this is not the natural view, which is that explanation is the autonomous goal of psychoanalysis, and that its propositions are truth-evaluable. Accordingly, it seems that preference should be given to whatever reconstruction of psychoanalytic theory does most to advance its claim to truth, within, of course, exegetical constraints (what a reconstruction offers must be visibly present in Freud's writings).

Viewed in these terms, psychoanalytic explanation is an 'extension' of ordinary psychology, one that is warranted by demands for explanation generated from within ordinary psychology itself. This has several crucial ramifications. It eliminates, as ill-conceived, the question of psychoanalysis' scientific status, an issue much discussed as proponents of different philosophies of science have argued for and against psychoanalysis' agreement with the canons of scientific method. Demands that psychoanalytic explanation should be demonstrated to receive inductive support, commit itself to testable psychological laws, and contribute effectively to the prediction of action then have no more pertinence than the same demands pressed on ordinary psychology, which is not very great. When the conditions for legitimacy are appropriately scaled down, it is extremely likely that psychoanalysis succeeds in meeting them: for psychoanalysis does deepen our understanding of psychological laws, improve the predictability of action in principle, and receive inductive support in the special sense which is appropriate to interpretative practices.

Furthermore, to the extent that psychoanalysis may be seen as structured by and serving well-defined needs for explanation, there is proportionately diminished reason for thinking that its legitimation turns on the analysand's assent to psychoanalytic interpretations, or on the transformative power (whatever it may be) of these. Certainly it is true that psychoanalytic explanation has a reflective dimension lacked by explanations in the physical sciences: psychoanalysis understands its object, the mind, in the very terms that the mind employs in its unconscious workings (such as its belief in its own omnipotence). But this point does not in any way count against the objectivity of psychoanalytic explanation. It does not imply that what it is for a psychoanalytic explanation to be true should be identified, pragmatically, with the fact that an interpretation may, for the analysand who gains self-knowledge, have the function of translating unconscious mentality into a proper conceptual form. Nor does it imply that psychoanalysis' attribution of unconscious content needs to be understood in anything less than full-bloodedly realistic terms. Truth in psychoanalysis may be taken to consist in correspondence with an independent mental reality, a reality that is both endowed with 'subjectivity' and in many respects puzzling to its owner.

In the twentieth century, the last major, self-consciously naturalistic school of philosophy was American 'pragmatism', as exemplified particularly in the works of John Dewey (1859-1952). The pragmatists replaced traditional metaphysics and epistemology with the theories and methods of the sciences, and grounded their view of human life in Darwin's biology. Following the Second World War, pragmatism was eclipsed by logical positivism and what might be called 'scientific' positivism, a philosophy that takes science as the defining characteristic of all legitimate statements. Ernst Mach is frequently regarded as a founder of logical positivism; he argued, in his book The Conservation of Energy, that only the objects of sense experience have any role in science. The task of physics is 'the discovery of the laws of the connection of sensations (perceptions)', and 'the intuition of space is bound up with the organization of the senses . . . (so that) we are not justified in ascribing spatial properties to things which are not perceived by the senses'. Thus, for Mach, our knowledge of the physical world is derived entirely from sense experience, and the content of science is entirely characterized by the relationships among the data of our experience.

Nevertheless, pragmatism is a going concern in the philosophy of science. It is often aligned with the view that scientific theories are not true or false, but are better or worse instruments for prediction and control. Charles Peirce (1839-1914), for example, identifies truth itself with a kind of instrumentality: a true belief is the very best we could do by way of accounting for the experiences we have, predicting the future course of experience, and so on.

Peirce called the sort of inference which concludes that all A's are B's because there are no known instances to the contrary 'crude induction'. It assumes that future experience will not be 'utterly at variance' with past experience. This is, Peirce says, the only kind of induction by which we are able to infer the truth of a universal generalization. Its flaw is that 'it is liable at any moment to be utterly shattered by a single experience'; that is to say, warranted belief of this kind is possible only at the observational level. Induction tells us what theories are empirically successful, and thereby what explanations are successful. But the success of an explanation cannot, for historical reasons, be taken as an indicator of its truth.

The thesis that the goal of inquiry is permanently settled belief, and the thesis that the scientific attitude is a disinterested desire for truth, are united by Peirce's definition of 'true'. He does not think it false to say that truth is correspondence to reality, but shallow: a merely nominal definition, giving no insight into the concept. His pragmatic definition identifies truth with the hypothetical ideal which would be the final outcome of scientific inquiry were it to continue indefinitely. 'Truth is that concordance of . . . [a] statement with the ideal limit towards which endless investigation would tend to bring scientific belief'; 'any truth more perfect than this destined conclusion, any reality more absolute than what is thought in it, is a fiction of metaphysics'. These remarks reveal something both of the subtlety and of the potential for tension within Peirce's philosophy. His account of reality aims at a delicate compromise between the undesirable extremes of transcendentalism and idealism, his account of truth at a delicate compromise between the twin desiderata of objectivity and (in-principle) accessibility.

The question of what is and what is not philosophy is not simply a question of classification. In philosophy, the concepts with which we approach the world themselves become the topic of enquiry. A philosophy of a discipline such as history, physics, or law seeks not so much to solve historical, physical, or legal questions as to study the concepts that structure such thinking, and to lay bare their foundations and presuppositions. In this sense philosophy is what happens when a practice becomes self-conscious. The borderline between such 'second-order' reflection and ways of practising the first-order discipline itself is not always clear: philosophical problems may be tamed by the advance of a discipline, and the conduct of a discipline may be swayed by philosophical reflection; any sharp division neglects the fact that self-consciousness and reflection co-exist with activity. At different times there has been more or less optimism about the possibility of a pure or 'first' philosophy, a stand-point from which other intellectual practices can be impartially assessed and subjected to logical evaluation and correction; the task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much 'positivist' philosophy of science, few philosophers now subscribe to it. The contemporary spirit of the subject is hostile to any such possibility, and prefers to see philosophical reflection as continuous with the best practice of the fields of enquiry themselves.

Nonetheless, the last two decades have been a period of extraordinary change in psychology. Cognitive psychology, which focuses on higher mental processes like reasoning, decision making, problem solving, language processing and higher-level visual processing, has become a - perhaps the - dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour. Largely as a result of this paradigm shift, the level of interaction between the disciplines of philosophy and psychology has increased dramatically.

One of the central goals of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as biological function.

Typically, a functional explanation in biology says that an organ 'χ' is present in an animal because 'χ' has function 'F'. What does that mean?

Some philosophers maintain that an activity of an organ counts as a function only if the ancestors of the organ's owner were naturally selected, partly because they had similar organs that performed the same activity. Thus, the historical-causal property of having conferred a selective advantage is not just evidence that 'F' is a function; it is constitutive of F's being a function.

If this reductive analysis is right, a functional explanation turns out to be a sketchy causal explanation of the origin of 'χ'. This makes the explanation scientifically respectable, because the 'because' indicates a weak relation of partial causal contribution.

However, this construal is not satisfying intuitively. To say that 'χ' is present because it has a function is normally taken to mean, roughly, that 'χ' is present because it is supposed to do something useful. Yet this normal interpretation immediately makes the explanation scientifically problematic, because the claim that 'χ' is supposed to do something useful appears to be normative and non-objective.

The philosophy of physics is another area in which studies of this sort have been actively pursued. In undertaking this work, philosophers need not and do not assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts and explanatory strategies that scientists are using - accounts that are more explicit, systematic and philosophically sophisticated than the often rather rough-and-ready accounts offered by the scientists themselves.

This account of intentionality extends beyond the cases usually taken as paradigms - beliefs, or beliefs and desires - to perception and action. The key to understanding intentionality is representation, in a special sense of that word: we can explain intentional states in general as having both a propositional content and a psychological mode, the psychological mode determining the direction of fit with which the intentional state represents its conditions of satisfaction. These considerations extend even to those intentional states with propositional content which do not themselves have a mind-to-world or world-to-mind direction of fit: all of these contain beliefs and desires, and the component beliefs and desires do have a direction of fit.

Once again, the paradigm cases of intentionality usually discussed are beliefs, or sometimes beliefs and desires. However, the biologically most basic forms of intentionality are in perception and intentional action. These also have certain formal features which are not common to beliefs and desires. Consider a case of perception. Suppose I see my hand in front of my face. What are the conditions of satisfaction? First, the perceptual experience of the hand in front of my face has as its condition of satisfaction that there is a hand in front of my face. Thus far the condition of satisfaction is the same as that of the belief that there is a hand in front of my face. But with perceptual experience there is this difference: in order that the intentional content be satisfied, the fact that there is a hand in front of my face must cause the very experience whose intentional content is that there is a hand in front of my face. This has the consequence that perception has a special kind of condition of satisfaction that we might describe as 'causally self-referential'. The full conditions of satisfaction of the perceptual experience are, first, that there be a hand in front of my face, and second, that the fact that there is a hand in front of my face cause the very experience of whose conditions of satisfaction it forms a part. We can represent this in our canonical form as:

Visual experience (that there is a hand in front of my face, and that the fact that there is a hand in front of my face is causing this very experience).

Furthermore, visual experiences have a kind of conscious immediacy not characteristic of beliefs and desires. A person can literally be said to have beliefs and desires while sound asleep. But one can only have visual experiences of a non-pathological kind when one is fully awake and conscious, because visual experiences are themselves forms of consciousness.

Event memory is a kind of halfway house between perceptual experience and belief. Memory, like perceptual experience, has the causally self-referential feature: unless the memory is caused by the event of which it is the memory, it is not a case of satisfied memory. But unlike visual experience, it need not be conscious; one can be said to remember something while sound asleep. Beliefs, memory and perception all have the mind-to-world direction of fit, and memory and perception have the world-to-mind direction of causation.

Increasingly, proponents of the intentional theory of perception argue that perceptual experience is to be differentiated from belief not only in terms of attitude, but also in terms of the kind of content the experience is an attitude towards. To ascribe contents within a certain class of content-involving states is for attributions of those states to make the subject as rationally intelligible as possible in the circumstances. In one form or another, this idea is found in the writings of Davidson (1917-2003), who introduced the position known as 'anomalous monism' in the philosophy of mind, instigating a vigorous debate over the relation between mental and physical descriptions of persons, and the possibility of genuine explanation of events in terms of psychological properties. Although Davidson is a defender of the doctrines of the 'indeterminacy of radical translation' and the 'inscrutability of reference', his approach has seemed to many to offer some hope of identifying meaning as a respectable notion, even within a broadly 'extensional' approach to language. Davidson is also known for his rejection of the idea of a 'conceptual scheme', thought of as something peculiar to one language or one way of looking at the world, arguing that where the possibility of translation stops, so does the coherence of the idea that there is anything to translate.

Intentional action has interesting symmetries and asymmetries with perception. Like perceptual experience, the experiential component of intentional action is causally self-referential. If, for example, I am now walking to my car, then the condition of satisfaction of my present experience is that there be certain bodily movements, and that this very experience of acting cause those bodily movements. What is more, like perceptual experience, the experience of acting is typically a conscious mental event. However, unlike perception and memory, the direction of fit of the experience of acting is world-to-mind: my intention will only be fully carried out if the world changes so as to match the content of the intention (hence, world-to-mind direction of fit), and the intention will only be fully satisfied if the intention itself causes the rest of its conditions of satisfaction (hence, mind-to-world direction of causation).

Increasingly, proponents of the intentional theory of perception argue that perceptual experience is to be differentiated from belief not only in terms of attitude, but also in terms of the kind of content the experience is an attitude toward. The aim is a better understanding of a person's reasons, of the array of emotions and sensations to which he is subject, of what he remembers and what he forgets, and of how he reasons beyond the confines of minimal rationality. Even for content-involving perceptual states, such relations play a fundamental role in individuating content. This, however, cannot be understood purely in terms of relations of minimal rationality: a perception of the world as being a certain way is not, and could not be, under a subject's rational control. Though it is true that perceptions give reasons for forming beliefs, the beliefs for which they fundamentally provide reasons - observational beliefs about the environment - have contents which can only be elucidated by referring back to the perceptual experiences themselves. In this respect (as in others), perceptual states differ from those beliefs and desires that are individuated by mentioning what they provide reasons for judging or doing: for frequently, these latter judgements and actions can be individuated without reference back to the states that provide reasons for them.

We are acutely aware of the effects of our own memory, its successes and its failures, so that we have the impression that we know something about how it operates. But with memory, as with most mental functions, what we are aware of is the outcome of its operation and not the operation itself. To introspection, the essence of memory is language-based and intentional: when we appear as witnesses in court, the truth as we are seen to report it is what we say about what we intentionally retrieve. This is, however, a very restricted view of memory, albeit one with a distinguished history. William James (1842-1910) was an American psychologist and philosopher whose own emotional needs gave him an abiding interest in problems of religion, freedom, and ethics; the popularity of these themes and his lucid and accessible style made James the most influential American philosopher of the beginning of the 20th century. James said that 'Memory proper is the knowledge of a former state of mind after it has already once dropped from consciousness, or rather it is the knowledge of an event, or fact, of which meantime we have not been thinking, with the additional consciousness that we have thought or experienced it before'.

One clue to the underlying structure of our memory system might be its evolutionary history. We have no reason to suppose that a special memory system evolved recently, or to consider linguistic aspects of memory and intentional recall as primary. Instead, we might assume that such features are later additions to a much more primitive filing system. From this perspective one would view memory as having the primary function of enabling us (the organism as a whole, that is, not the conscious self) to interpret the perceptual world and helping us to organize our responses to changes that take place in the world.

Other aspects of the content of memory concern what the capacity to remember comprises: the capacities (1) to recall past experiences, and (2) to retain knowledge that was acquired in the past. It would be a mistake to omit (1), for not every instance of remembering something is an instance of retaining knowledge. Suppose that as a young child you saw the Sky Dome in Toronto, but you did not know at the time which building it was. Later you learn what the Sky Dome is, and you remember having seen it when you were a child. This is an example of obtaining knowledge of a past fact by recalling a past experience, but not an example of retaining knowledge, because at the time you saw it you did not know what you were seeing, since you did not know what the Sky Dome was. Furthermore, it would be a mistake to omit (2), for not every instance of remembering something is an instance of recalling the past, let alone a past experience. For example, by remembering my telephone number, I retain knowledge of a past fact, and by remembering the date of the next elections, of a future fact.

According to Aristotle (De Memoria), memory cannot exist without imagery: we remember past experiences by recalling images that represent them. This theory - the representative theory of memory - was also held by David Hume and Bertrand Russell (1921). It is subject to three objections, the first of which was recognized by Aristotle himself: if what I remember belongs to the past, how can what I remember be an image now present to my mind? According to the second objection, we cannot tell the difference between images that represent actual memories and those that are mere figments of the imagination. Hume suggested two criteria to distinguish between these two kinds of images, vivacity and orderliness, and Russell a third, an accompanying feeling of familiarity. Critics of the representative theory would argue that these criteria are not good enough, that they do not allow us to distinguish reliably between true memories and mere imagination. This objection is not decisive, as it only calls for a refinement of the proposed criteria. Nevertheless, the representative theory succumbs to the third objection, which is fatal: remembering something does not require an image. In remembering their dates of birth, or telephone numbers, people do not, at least not normally, have an image of anything. In developing an account of memory, we must, therefore, proceed without making images an essential ingredient. One way of accomplishing this is to take the thing that is remembered to be a proposition, the content of which may be about the past, present, or future. Doing so would provide us with an answer to the problem pointed out by Aristotle: if the proposition we remember is a truth about the past, then we remember the past by virtue of having a cognition of something present - the proposition that is remembered.

What, then, are the necessary and sufficient conditions of remembering a proposition, of remembering that 'p'? To begin with, believing that 'p' is not a necessary condition, for at a given moment 't' I may not be aware of the fact that I still remember that 'p', and thus not believe that 'p' at 't'. It is even possible that I remember that 'p' but, perhaps because I gullibly trust another person's judgement, unreasonably disbelieve that 'p'. It will, however, be helpful to focus on the narrower question: under which conditions is S's belief that 'p' an instance of remembering that 'p'? It is such an instance only if 'S' either (1) previously came to know that 'p', or (2) had an experience that put 'S' in a position subsequently to come to know that 'p'. Call this the 'original input condition'. Suppose, having learned in the past that 12 x 12 = 144 but subsequently having forgotten it, I now come to know again that 12 x 12 = 144 by using a pocket calculator. Here the original input condition is fulfilled, but obviously this is not an example of remembering that 12 x 12 = 144. Thus, a further condition is necessary: for S's belief that 'p' to be a case of remembering that 'p', the belief must be connected in the right way with the original input. Call this the 'connection condition'. According to Carl Ginet (1988), the connection must be 'epistemic': at any time since the original input at which 'S' acquired evidence sufficient for knowing that 'p', 'S' already knew that 'p'. Critics would dispute that a purely epistemic account of the connection condition will suffice. They would insist that the connection be 'causal': for 'S' to remember that 'p', there must be an uninterrupted causal chain connecting the original input with the present belief.

Not every case of remembering that 'p' is one of knowing that 'p': although I remember that 'p', I might not believe that 'p', and I might not be justified in believing that 'p', for I might have information that undermines or casts doubt on 'p'. When, then, do we know something by remembering it? What are the necessary and sufficient conditions of knowing that 'p' on the basis of memory? Applying the traditional conception of knowledge, we may say that 'S' knows that 'p' on the basis of memory just in case (1) 'S' clearly and distinctly remembers that 'p'; (2) 'S' believes that 'p'; and (3) 'S' is justified in believing that 'p'. (Since (1) entails that 'p' is true, adding a condition requiring p's truth is not necessary.) Whether this account of memory knowledge is correct, and how it is to be fleshed out in detail, are questions which concern the nature of knowledge and epistemic justification in general, and thus give rise to much controversy.

Memory knowledge is possible only if memory is a source of justification. Common sense assumes it is: we naturally believe that, unless there are specific reasons for doubt, we do remember what we seem to remember, so long as this is not undermined or contradicted by our background beliefs. Thus, we trust that we have knowledge of the past. Sceptics, however, would argue that this trust is ill-founded. According to a famous argument by Bertrand Russell (1927), it is logically possible that the world sprang into existence five minutes ago, complete with our memories and with evidence, such as fossils and petrified trees, suggesting a past of millions of years. If that is possible, there is no logical guarantee that we actually do remember what we seem to remember. Consequently, so the sceptics would argue, there is no reason to trust memory. Some philosophers have replied to this line of reasoning by trying to establish that memory is necessarily reliable, that it is logically impossible for the majority of our memory beliefs to be false. Alternatively, our common-sense view may be defended by pointing out that the argument's conclusion - that it is unreasonable to trust memory - does not follow from its premise: that memory fails to provide a guarantee that what we seem to remember is true. For the argument to be valid, it would have to be supplemented with a further premise: for a belief to be justified, its justifying reason must guarantee its truth. Many contemporary epistemologists would dismiss this premise as unreasonably strict. One of the chief reasons for resisting it is that it is no more reasonable than our trust in particular, clear and vivid deliverances of memory; on the contrary, accepting these as true would actually appear less error-prone than accepting an abstract philosophical principle which implies that our acceptance of such deliverances is unjustified.

The distinction between forms of memory is a crude one, not least because the degrees implied by such terms as 'conscious' and 'explicit' are so cloudy. As Schacter, McAndrews and Moscovitch (1988) describe it, amnesia is an inability to remember recent experiences (even from the very recent past) and to learn various, though limited, types of new information, resulting from selective brain damage that leaves perceptual, linguistic, and intellectual skills intact. Memory deficits have traditionally been studied using techniques designed to elicit explicit memories: amnesic persons might be instructed to think back to a learning episode and either recall information from that interval of their lives, or say whether a presented item had previously been encountered in the learning episode. Yet the very same persons who perform poorly on such explicit tests can show preserved learning when memory is tested indirectly. The acquisition of skills is a case in point, and there is considerable experimental evidence of amnesic learning over a series of episodes. A striking example is a densely amnesic patient who learned how to use a personal computer over numerous sessions, despite declaring at the beginning of each session that he had never used a computer before. In addition to this sort of capacity to learn over a succession of episodes, amnesics have performed well after single, short-lived episodes (such as completing previously shown words when given their 3-letter cues).
Thus amnesic patients clearly reveal the difference between conscious and nonconscious memory; and similar dissociations can be observed in normal subjects, as when performance on indirect tasks reveals the effects of prior events that are not remembered.

Basically, the function of memory is to enable us to interpret the perceptual world and to help us organize our responses to the challenges of change that take place in the world. For both functions we have to accumulate experiences in a memory system in such a way as to enable productive access to that experience at the appropriate times. Memory, then, can be seen as the repository of experience. Of course, beyond a certain age, we are able to use our memories in different ways, both to store information and to retrieve it. Language is vital in this respect, and it might be argued that much of socialization and the whole of schooling are devoted to just such an extension of an evolutionarily (relatively) straightforward system. It will follow that most of the operation of our memory system is preconscious. That is to say, consciousness only has access to the product of the memory processes and not to the processes themselves. The aspects of memory that we are conscious of can be seen as the final stage in a complex and hidden set of operations.

How should we think about the structure of memory? The dominant metaphor is that of association: words, ideas, and emotions are seen as being linked together in an endless, shapeless entanglement. That is the way our memory can appear to us if we attempt to reflect on it directly. However, it would be a mistake to dwell too much on the problems of consciousness and imagine that such introspections represent the inner structure of memory. For a cognitive psychologist interested in natural memory phenomena there are a number of reasons for being deeply dissatisfied with theories based on associative networks. One ubiquitous class of memory failure seems particularly troublesome: the experience of being able to recall a great deal of what we know about an individual other than their name. 'I know the face, but I just can't place the name.' Yet if someone else produces the name, we may well be able to retrieve the rest of the information we needed.

How might various theories of memory account for this phenomenon? First we can take an associative network approach. In an idealized associative network, concepts, such as the concept of a person, are represented as nodes, with associated nodes being connected through links. Generally speaking, the links define the nature of the relationship between nodes, e.g., the subject-predicate distinction. Suppose that the name of the person we are trying to recall is Bill Smith. We would have a Bill Smith node (or a node corresponding to Bill Smith) with all the available information concerning Bill Smith being linked to it, forming some kind of propositional representation that includes Bill Smith's name. Now, failure to retrieve Bill Smith's name while at the same time recalling other facts about Bill Smith would have to be due to an inability to traverse the links to the name node. However, this contradicts a defining property of such networks - content addressability. That is to say, given that any one constituent of a propositional representation can be accessed, the propositional node, and consequently all the other nodes linked to it, should also be accessible. Thus, if we are able to recall where Bill Smith lives, where he works, and whom he is married to, then we should, in principle, be able to access the node representing his name. To account for the inability to do so, some sort of temporary 'blocking' of content addressability would seem to be needed. Alternatively, a directionality of links would have to be specified, though this would have to be done on an independently motivated basis.
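The content-addressability point can be made concrete with a toy network. In this sketch (the node names, facts, and link structure are purely illustrative, not drawn from any published model), reaching the network through any one linked fact makes every other linked fact reachable, which is why a selective failure to retrieve the name alone is anomalous for a plain associative network:

```python
# Toy associative network with symmetric links between nodes.
# All node labels here are hypothetical examples.
from collections import defaultdict

links = defaultdict(set)

def associate(a, b):
    """Create a symmetric associative link between two nodes."""
    links[a].add(b)
    links[b].add(a)

# Link the Bill Smith node to the facts stored about him.
person = "BillSmith"
for fact in ["name: Bill Smith", "lives: Leeds",
             "works: the bank", "married-to: Mary"]:
    associate(person, fact)

def reachable(start):
    """All nodes retrievable from a starting node - the formal
    counterpart of content addressability."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(links[node])
    return seen

# Entering the network via any single fact reaches the name node too:
print("name: Bill Smith" in reachable("lives: Leeds"))  # True
```

The sketch shows why the theory needs an extra, unmotivated mechanism (blocking or directional links) before it can model the name-retrieval failure at all.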

Next, consider schema approaches. Schema models stipulate that there are abstract representations, i.e., schemata, in which all invariant information concerning any particular thing is represented. So we would have a person schema for Bill Smith containing all the invariant information about him: his name, personality traits, attitudes, where he lived, whether he had a family, etc. It is not clear how one would deal with our example. Since someone's name is a quintessentially invariant property, it would, given that it is known, have to be represented in the schema for that person. And, from our example, we know that other invariant information, as well as variant, non-schematic information (e.g., the last talk he had given), was available for recall. This must be taken as evidence that the schema for Bill Smith was accessed. Why, then, were we unable to recall one particular piece of information that would have to be represented in the schema we clearly had access to? We would have to assume that within the person schema for Bill Smith are sub-schemata, one of which contains Bill Smith's name, another the name of his wife, and so forth. We would further have to assume that access to the sub-schemata is independent and that, at the time in question, the one containing Bill Smith's name was temporarily inaccessible. Unfortunately, the concept of temporary inaccessibility is without precedent in schema theory and does not seem to be independently motivated.
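The schema proposal can likewise be sketched as a nested structure; the fields and the `blocked` parameter below are hypothetical illustrations, not part of any schema theory. The point is that the blocking mechanism has to be bolted on from outside - nothing in the structure itself motivates one sub-schema being inaccessible while its siblings are retrieved:

```python
# Hypothetical person schema with independent sub-schemata.
bill_smith_schema = {
    "identity": {"name": "Bill Smith"},    # sub-schema holding the name
    "family":   {"wife": "Mary Smith"},    # sub-schema for family facts
    "work":     {"employer": "the bank"},  # sub-schema for work facts
}

def recall(schema, blocked=()):
    """Retrieve every sub-schema except those temporarily 'blocked'.
    'blocked' is an ad hoc patch with no independent motivation in
    schema theory - which is exactly the objection in the text."""
    return {k: v for k, v in schema.items() if k not in blocked}

# Everything except the name is recalled:
recalled = recall(bill_smith_schema, blocked=("identity",))
print(sorted(recalled))  # ['family', 'work']
```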

Nonetheless, there are two other classes of memory problem that do not fit comfortably into the conventional frameworks. One is that of not being able to recall an event in spite of being given the most detailed cues. This is commonly found when one partner is attempting to remind the other of a shared experience. Finally, we have all had the experience of a memory being triggered spontaneously by something that was just an irrelevant part of the background of an event. Common triggers of such experiences are specific locales in town or country, scents, and certain pieces of music.

What we learn from these kinds of events is that we need a model that readily allows for the following properties:

(1) Not all knowledge is directly retrievable;

(2) The central parts of an episode do not necessarily cue recall of that episode;

(3) Peripheral cues, which are non-essential parts of the context, can cue recall.

In response to these requirements, the framework within which the model is couched is that of information processing. In trying to solve the problem, we first suppose that memory consists of discrete units, or 'records', each containing information relevant to an 'event', an event being, for example, a person or a personal experience. Information contained in a record could take any number of forms, with no restrictions being placed on the way information is represented, on the amount represented, or on the number of records that could contain the same nominal information. Attached to each of these records would be some kind of access key. The function of this access key is singular: it enables the retrieval of the record and nothing more. Only when the particular access key is used can the record, and the information contained therein, be retrieved. As with the record, we assume that any type of information could be contained in the access key. However, two features would distinguish it from the record. First, the contents of the access key would be in a different form from that of the record, e.g., represented in a phonological or other central code. Second, the contents of the access key would not themselves be retrievable.

The nature of the match required between the 'description' and a 'heading' will be a function of the type of information in the description. If the task is to find the definition of a word or information on a named individual, then a precise match may be required, at least for the verbal part of the description. We assume that the headings are searched in parallel. On many occasions there will be more than one heading that matches the description; however, we require that only one record be retrieved at a time. Evidence in support of this assumption is summarized in Morton, Hammersley and Bekerian (1985). The data indicate that the more recent of two candidate records is the one retrieved. We conclude, first, that once a match is made the search process terminates and, secondly, that the matching process is biased in favour of the more recent headings. There is, of course, no guarantee that the retrieved record will contain the information that is sought. The record may be incomplete or wrong. In such cases, or in the case that no record has been retrieved, there are two options: either the search is continued or it is abandoned. If the search is to be continued, then a new description will have to be formed, since searching again with the same description would result in the same outcome as before. Thus, there has to be a set of criteria upon which a new description can be based.

Retrieval depends upon a match between the description and the heading of a record. The relationship between the given cue and the description is open. It is clear that there needs to be a process of description formation which will pick out the most likely descriptors from the given cue. Clearly, for the search process to be rational, the set of descriptors and the set of headings should overlap. The only reasonable state of affairs would be that the creation of headings and the creation of descriptions are the responsibility of the same mechanism.
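The retrieval cycle described above — headings searched in parallel, a recency-biased single match, and re-description when the retrieved record is wrong or absent — can be sketched in code. This is a minimal illustrative sketch, not the authors' own model: the names (`Record`, `retrieve`, `search`), the subset-matching rule, and the integer recency stamp are all my assumptions.

```python
# A hedged sketch of the headed-records retrieval model described above.
# Names and the subset-matching rule are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Record:
    heading: frozenset   # the access key: the only route to the record
    content: dict        # information about the 'event'; not itself searchable
    recency: int         # higher = more recently laid down

def retrieve(records, description):
    """Return the single matching record, or None.
    A heading matches when it contains every descriptor in the description;
    the match is biased toward the most recent heading, and the search
    terminates once one record is retrieved."""
    matches = [r for r in records if description <= r.heading]
    if not matches:
        return None
    return max(matches, key=lambda r: r.recency)

def search(records, cue, form_description, max_attempts=3):
    """Form a description from the cue; if no record is retrieved,
    re-describe and try again (re-using the same description would
    give the same outcome), else abandon the search."""
    tried = set()
    for attempt in range(max_attempts):
        description = form_description(cue, attempt)
        if description in tried:
            continue  # same description => same outcome as before
        tried.add(description)
        record = retrieve(records, description)
        if record is not None:
            return record
    return None  # search abandoned
```

The recency bias falls out of the `max(..., key=recency)` step; a fuller model would also need the criteria on which a new description is based, which the text leaves open.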

In his early works ‘Distributive Normal Forms’ (1953) and ‘Form and Content in Quantification Theory’ (in Two Papers on Symbolic Logic, 1955), the Finnish philosopher Jaakko Hintikka (1929- ) developed two logical theories which he has since applied to many different areas: the theory of distributive normal forms for quantification theory, and the theory of model sets, which yields semantically motivated proof procedures for quantification theory and modal logics.

Although Hintikka has worked in a wide area, his work shows a great deal of conceptual and theoretical unity. This is due partly to the logical and semantical methods he uses, and partly to the transcendental character (in the Kantian sense) of his philosophy. Hintikka has emphasized the role of rule-governed human activities in knowledge acquisition and in cognitive representation: his game-theoretical approach to meaning is a case in point. The structures of such activities can be taken to provide the synthetic a priori features of our knowledge. In this respect Hintikka’s philosophy is Kantian in spirit.

However, since it cannot be the case that all terms of a language are explicitly definable in that language - that would involve circularity - the most one could hope for is that explicit definition would sustain the distinction between theoretical and observational terms. At first glance, the prospects of finding explicit definitions for all theoretical terms appear dim. Some theoretical terms - particularly those involved in functional identities - look like mere stenographic abbreviations: ‘momentum’, for instance, for ‘product of mass and velocity’. But others are not explicitly definable. In the most fundamental sense, to define is to delimit. Thus, definitions serve to fix the boundaries of phenomena or the range of applicability of terms or concepts. That whose range is to be delimited is called the ‘definiendum’, and that which delimits it the ‘definiens’. Social science practice tends to focus on specifying the application of concepts through ‘formal’ operational definitions. Philosophical discussions have concentrated almost exclusively on articulating ‘definitional forms’ for terms.

Definitions are ‘full’ if the ‘definiens’ completely delimits the ‘definiendum’, and ‘partial’ if it only brackets or circumscribes it. ‘Explicit definitions’ are full definitions where the ‘definiendum’ and the ‘definiens’ are asserted to be equivalent. Theories or models which are so rich in structure that sub-portions are functionally equivalent to explicit definitions are said to provide ‘implicit definitions’. In formal contexts our basic understanding is provided by the ‘Beth definability theorem’, which not only provides a fundamental understanding of explicit definition, but shows that relaxing the conditions on explicit definition enables an understanding of Carnap’s notion of partial interpretation: whereas explicit definitions fully specify (implicitly define) the referents of theoretical terms within intended models of the theory, creative partial definitions serve only to restrict the range of objects in intended models that could be the referents of theoretical terms.
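As a formal gloss (this is the standard model-theoretic statement, not drawn from the source text): Beth's theorem says that for first-order theories, implicit and explicit definability coincide.

```latex
% Beth definability theorem (first-order logic), a standard statement.
% Suppose a theory T in language L \cup \{P\} implicitly defines P,
% i.e. any two models of T with the same L-part agree on P:
\mathcal{M}\!\restriction_L \;=\; \mathcal{M}'\!\restriction_L
  \;\Longrightarrow\; P^{\mathcal{M}} = P^{\mathcal{M}'}
  \quad\text{for all } \mathcal{M}, \mathcal{M}' \models T.
% Then T explicitly defines P by some formula \varphi of L alone:
T \;\vdash\; \forall \bar{x}\,\bigl(P(\bar{x}) \leftrightarrow \varphi(\bar{x})\bigr).
```

Relaxing the biconditional to one-way conditionals (reduction sentences) gives the partial specifications mentioned above, which restrict rather than fix the referents of theoretical terms.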

Consider the individuation of theories: what determines whether T1 and T2 are instances of the same theory or distinct theories? By construing scientific theories as partially interpreted syntactical axiom systems TC, positivism made the specifics of the axiomatization the individuating features of the theory. Thus, different choices of axioms T, or alterations in the correspondence rules C - say, to accommodate a new measurement procedure - resulted in a new scientific theory. Positivists also held that the axioms and correspondence rules implicitly defined the meanings of the theory’s descriptive terms τ. Thus significant alterations in the axiomatization would result not only in a new theory T’C’ but in one with changed meanings τ’. Kuhn and Feyerabend maintained that the resulting changes could make TC and T’C’ non-comparable, or ‘incommensurable’. Attempts to explore individuation issues for theories via meaning change or ‘incommensurability’ proved unsuccessful and have been largely abandoned.

Feyerabend’s differences with Kuhn are two. First, Feyerabend’s variety of incommensurability is more global: it cannot be localized in the vicinity of a single problematic term, or even a cluster of terms, since Feyerabend holds that fundamental changes of theory lead to changes in the meaning of all the terms in a particular theory. The second significant difference concerns the reason for incommensurability. Whereas Kuhn thinks that incommensurability stems from specific translational difficulties involving problematic terms, Feyerabend’s variety of incommensurability seems to result from a kind of extreme holism about the nature of meaning itself.

One significant point of agreement between Kuhn and Feyerabend is that neither thinks that incommensurability entails incomparability: both countenance, and indeed recommend, alternative modes of comparison. Feyerabend says that ‘the use of incommensurable theories for the purpose of criticism must be based on methods which do not depend on the comparison of statements with identical constituents; such methods are readily available’. But although he mentions a number of methods, he does not explicate them in full. For example, he says that theories can be compared using the ‘pragmatic theory of observation’, according to which one attends to the causes of the production of a certain observational sentence rather than to the meaning of that sentence. Further, he argues that ‘we do not compare meanings: we investigate the conditions under which a structural similarity can be obtained’, insisting that ‘there may be empirical evidence against [a theory], and for another theory, without any need for similarity of meaning’. On a more sarcastic, though revealing, note, Feyerabend states: ‘Of course, some kind of comparison is always possible (for example, one physical theory may sound more melodious when read aloud to the accompaniment of a guitar than another physical theory)’. At any rate, he insists that ‘it is possible to use incommensurable theories for the purpose of mutual criticism’, adding that this removes ‘one of the main “paradoxes”’ of the approach. Finally, he uses the same analogy that Kuhn uses to explain a scientist’s ability to learn a new theory, that of a child learning a new language. Rather than translating between languages, ‘[w]e can learn a language or a culture from scratch, as a child learns them, without detour through our native tongue’.

Nevertheless, it is commonly supposed that definitions are analytic specifications of meaning. In some cases, such as stipulative definitions, this may be so; however, some philosophers allow specifications of meaning to be synthetic. Reduction sentences are often descriptions of measurement apparatus specifying empirical correlations between detector output readings and values for parameters. These are synthetic, and are rarely mere specifications of meaning. The larger point is that specification of meaning is only one of many possible means for delimiting the ‘definiendum’. Specification of meaning seems tangential to the bulk of scientific definitional practice.

Definitions are said to be ‘creative’ if their addition to a theory expands its content, and ‘non-creative’ if they do not. More generally, we can say that definitions are creative whenever the ‘definiens’ asserts contingent relations involving the ‘definiendum’. Thus definitions providing analytic specifications of meaning are non-creative. Most explicit definitions are non-creative, and hence ‘eliminable’ from theories without loss of empirical content. One could relativize the distinction so that definitions redundant given accepted theory or background belief in the scientific context are counted as non-creative. Either way, most scientific definitions are creative, like other scientific expressions of empirical correlation. Thus, for purposes of philosophical analysis, suppositions that definitions are either non-creative or mere meaning specifications demand explicit justification. Much of the literature concerning incommensurability and meaning change in science turns on uncritical acceptance of such suppositions.

The issue of incommensurability remains a live one. It does not arise just for a logical empiricist account of scientific theories, but for any account that allows for the linguistic representation of theories. Discussions of linguistic meaning cannot be banished from philosophical analysis; language figures prominently in the daily work of science itself, and its place is not about to be taken over by any other representational medium. Therefore, the challenge facing anyone who holds that the scientific enterprise sometimes requires us to make a point-by-point linguistic comparison of rival theories is to respond to the specific semantic problems raised by Kuhn and Feyerabend. Failing that, the challenge is to articulate another way of putting scientific theories in the balance and weighing them against one another.

Confusions abound in scientific and philosophical discourse over ‘operational definitions’. The notion was first introduced by P.W. Bridgman (1938) with reference to non-creative explicit full definitions specifying meaning in terms of operations performed in the measurement process. Behaviourist social scientists expanded the notion to include creative partial definitions, and in practice most operational definitions can be cast as synthetic creative reduction sentences specifying empirical relations between measurement procedures and intervening variables or hypothetical constructs. To objections that such definitions are illegitimate, social scientists respond that it is just a matter of quibbling over semantics - a response appropriate to Bridgman’s sort of operational definitions, but not to their own.

Many philosophers have been concerned with admissible ‘definitional forms’. Some require ‘real definitions’ - a form of explicit definition in which the ‘definiens’ equates the ‘definiendum’ with an essence specified as a conjunction Α1 ∧ . . . ∧ Αn of attributes. By contrast, ‘nominal definitions’ use non-essential attributes. The ‘Aristotelian definitional form’ further requires that real definitions be hierarchical, where the species of a genus share Α1, . . ., Αn-1, being differentiated only by the remaining essential attribute Αn. Such definitional forms are inadequate for evolving biological species, whose essences may vary. ‘Disjunctive polytypic definitions’ allow changing essences by equating the ‘definiendum’ with a finite disjunction of conjunctive essences. But future evolution may produce further new essences, so partially specified ‘potentially infinite disjunctive polytypic definitions’ were proposed. Such ‘explicit definitions’ fail to delimit the species, since they are incomplete. A superior alternative is to formulate reduction sentences for each known essence, adding new reduction sentences for subsequently evolved essences.

Wittgenstein (1953) claimed that many natural kinds lack conjunctive essences: rather, their members stand only in a family resemblance to each other. However, an important extension to the original theory of natural kinds was provided by Putnam (1926- ) and Kripke (1940- ). These philosophers presented their account as applying to natural kind terms in ordinary language, rather than to terms of theoretical science. Typical examples are ‘water’, ‘gold’, and ‘lemon’. They claimed, on the basis of intuitive hypothetical cases, that the intention to refer to a natural kind determined by a possibly unknown real essence is part of a correct account of the normal use of these terms in ordinary language. If this is right, then it is certainly reasonable to extend the account to the technical uses of theoretical terms in science. If the account cannot be sustained for the case of ordinary language kind terms, appeal to it as an account of scientific terms will be more problematic.

We then have a conception of theory as essentially an embodiment of analogies, both formal and material, which describe regularities among the data of a given domain (models of data and phenomenal laws), with analogies between these and models of data in other domains, and so on in a hierarchy of levels of a unifying theoretical system. The ‘meaning of theoretical terms’ is given by analogies with familiar natural processes (e.g., mechanical systems), or by hypothetical models (e.g., Bohr’s planetary atom). In either case, descriptive terms of the analogues are derived metaphorically.

In evaluating the Kripke-Putnam theory, the crucial point to note for present purposes is that it rests on a strong ontological presupposition: contrary to Locke, a large class of ordinary language kind terms must actually pick out (more or less) the requisite sort of natural kind. (Unless, at any rate, the theory of natural kinds is to be based on massive metaphysical delusion.) And, in addition, we must have some sense, in advance of scientific illumination of the real essence of a kind, of whether that kind is a natural kind. Putnam claims explicitly, for example, that the stuff on Twin Earth would not have been water even if the explorers from Earth had arrived before any means had been discovered for distinguishing H2O from XYZ. The intention to refer to the real nature pre-exists the characterization of that nature.

Suppe urged that natural kinds are constituted by a single kind-making attribute (e.g., being gold), and that which patterns of correlation obtain between the kind-making attribute and other diagnostic characteristics is a factual matter. Thus issues of appropriate definitional form (e.g., explicit, polytypic, or cluster) are empirical, not philosophical, questions.

Definitions of concepts are closely related to explications, where imprecise concepts (explicanda) are replaced by more precise ones (explicata). The explicandum and explicatum are never equivalent. In an adequate explication the explicatum will accommodate all clear-cut instances of the explicandum and exclude all clear-cut non-instances. The explicatum decides what to do with cases where application of the explicandum is problematic. Explications are neither real nor normative definitions and are generally creative. In many scientific cases, definitions function more as explications than as meaning specifications or real definitions.

In later developments of Kuhn’s view, less emphasis is placed on what might be called ‘evaluative incommensurability’, and more on ‘linguistic incommensurability’. By 1983, Kuhn appeared to have moved away from evaluative incommensurability entirely, no longer speaking of differences in ‘methods’. He states that this later version is the same as the ‘original version’ of the incommensurability thesis, which he characterizes as follows: ‘The claim that two theories are incommensurable is then the claim that there is no language, neutral or otherwise, into which both theories, conceived as sets of sentences, can be translated without residue or loss.’ If incommensurability equals untranslatability, what is it about scientific paradigms that precludes translation into a single common language, so that their claims can be set side by side and their points of agreement and disagreement isolated?

Meanwhile, it is characteristic of Russell’s philosophy that perception is analysed in terms of ‘acquaintance’. At least some of the philosophers who objected to direct realism appealed to the argument from illusion, which states that certain familiar facts about illusion disprove that theory of perception. There are, however, many versions of the argument, which must be distinguished carefully: some of these distinctions centre on the content of the premises (the kind of appeal made to illusion); others centre on the interpretation of the conclusion.

A crude statement of direct realism might run as follows: in perception, we sometimes directly perceive physical objects and their properties; we do not always perceive physical objects by perceiving something else, e.g., sense-data, that which is given by the senses. There are, however, difficulties with this formulation of the view. For one thing, a great many philosophers who are not direct realists would admit that it is a mistake to describe people as actually perceiving something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-datum theorists should want. At least many of the philosophers who objected to direct realism would prefer to express what they were objecting to in terms of a technical (and philosophically controversial) concept such as ‘acquaintance’. Using such a notion we could define direct realism this way: in veridical experience we are directly acquainted with parts, e.g., surfaces, or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects.

In considering complex expressions, we need to bear in mind that definitions can either ‘report’ or ‘institute’ equivalences among verbal or symbolic expressions, and that, in form, definitions are either explicit or implicit.

An institutive definition explains how an expression will be used henceforth; a reportive definition gives an account of how an expression has been used. An explicit definition explains, by means of words given in use, how an expression given in mention has been or will be used - for instance, the words ‘the cat’ are in mention in: ‘“The cat” is on the mat’, and in use in: ‘The cat is on the mat’. An implicit definition explains how an expression has been or will be used by using it, usually in conjunction with the use of other expressions.

Dictionary definitions are reportive and explicit. Symbols introduced in technical writings are usually institutive and explicit, and when a word is learned in the context of its use, that context, in effect, provides a reportive, implicit definition. Formal axiomatic systems, in which the meaning of each expression is gathered from its formal-logical relationships with the other expressions, provide institutive, implicit definitions.

This same course of thought seemingly makes Plato suggest that it is possible to have knowledge only about Forms, and that knowledge about sensible objects is impossible. Moreover, he also seems to hold sometimes that we cannot have, about Forms, that kind of cognition, belief or opinion, that we do have of sensibles. Yet he allows that it is possible to make mistakes about Forms, and also to be in a cognitive state concerning a Form that seems indistinguishable from what he seems obliged to call false belief or opinion. This, too, requires some kind of further explanation of the distinction between Forms and sensibles - a requirement that Plato seems to see some difficulty in satisfying.

Although in this phase of his work Plato concentrates on constructing a metaphysics that will make room for the possibility of knowledge, he does at the same time pay some attention to the problems that are characteristic of the first phase of his epistemology. In the ‘Meno’, the ‘Phaedo’ and the ‘Republic’, he develops what has been called the ‘method of hypothesis’, in which propositions are adopted provisionally rather than demonstrated unconditionally. In the ‘Meno’ and the ‘Phaedo’, he indicates that hypotheses are to be accepted only provisionally and not regarded as certain or unrevisable. In the ‘Republic’, however, he seems to maintain that one can somehow reach an ‘unhypothesized’ principle which will serve as the basis for demonstrating everything hitherto accepted merely hypothetically. He apparently implies that what is demonstrated thereby will have to do only with Forms. He also makes a suggestion, not clearly explained, that this ‘principle’ has something to do with the Form of the Good. There is no generally accepted interpretation of what Plato says here; however, it seems to indicate that he accepted, or was seriously considering, some kind of ‘foundationalist’ epistemological position, which would start from some unshakable principle and derive from it the rest of what there is to be known about Forms. (As often, however, Plato seems to waver between thinking of the principle and what is derived from it as possessing propositional structure and treating them as non-propositionally structured objects.)

This method of hypothesis is earlier offered as something that is used by ‘dialectic’, the style of philosophizing that takes place through conversational questions and answers. This justifies introducing a more sophisticated concept to account for the domain, on which the analysis is repeated. The result of dialectic analysis is an integrated network of concepts which specifies the proper domain of each and which preserves the legitimate content of earlier concepts in the final, most comprehensive and adequate concept.
