Time Preference and Net Consumption

Adapted from a Swedish blog post.

Everything else equal, poor people have a higher time preference and – which is to say the same thing – a lower degree of future orientation than rich people. Take a homeless person, for example: he has to try to survive the day or the week; he is not in a position to set money aside for long-range projects or for his retirement. At the other end of the spectrum, take a multi-billionaire such as Bill Gates or George Soros: he does not have to worry about surviving the next day, week, month, year or even decade; he can plan ahead for the future without having to concern himself too much with the present. He can even plan ahead for the time after his death and for securing the future of his children and grandchildren.

In between there are the rest of us: people with a moderate or fairly high income. We are in a position to set some of our money aside for the future: for buying a new house or a new car, providing for our children’s education, planning vacations, providing for our retirement.

But everything else is not always equal, so there are exceptions. A poor person may be struggling hard to get out of his poverty; and a very rich person may be squandering his wealth and end up poor.

If you are familiar with The Fountainhead, you may remember that Gail Wynand was sleeping on a couch in his office while building up The Banner and only later used his money to buy a yacht, create an art gallery, and commission a house from Howard Roark. – And for an example of rich people squandering their wealth, read Bernard de Mandeville’s The Fable of the Bees[1].

A change in the time preference of very poor people does little for the economy as a whole. Neither does such a change in the time preference of the few “squandering rich”. It is the time preference of the well-to-do and the industrious rich that makes a difference. As long as those people have a low time preference and a correspondingly high degree of future orientation, they will invest their money, and it is those investments that move the economy forwards.

According to George Reisman’s theory, the level of profit in the economy as a whole is equal to the net consumption of the capitalists (I leave net investment aside, because I don’t think it changes my point). As long as the capitalists have a low time preference, net consumption stays low; the greater part of their wealth goes to productive investments. And the richer they become, the lower their time preference becomes, the more gets invested, the more gets produced, the more workers get employed and the higher their wages become.
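Reisman’s relationship can be put schematically (the notation here is mine, offered only as a sketch, not as Reisman’s own formal statement). Writing Π for economy-wide profit, NC for the capitalists’ net consumption and NI for net investment:

```latex
% Schematic form of the net-consumption theory of profit
% (my own notation; net investment is set aside, as in the text)
\Pi = NC + NI, \qquad NI \approx 0 \;\Longrightarrow\; \Pi \approx NC
```

A fall in the capitalists’ time preference thus lowers NC, and with it the general level of profit and interest, while freeing the corresponding funds for productive investment.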

But assume that the capitalists’ time preference would increase (and their future orientation would correspondingly diminish); this could happen if there were to be a serious threat of confiscation of their wealth by a socialist government (or if there were certain indications that doomsday was approaching and the world would come to an end). Then the opposite would happen: they would consume their wealth instead of investing it; production would diminish or cease altogether; unemployment would rise; and so would the general level of profit and interest.

And this is why time preference is not a direct but an indirect cause of the level of profit and interest. It works through the net consumption of the capitalists.

$ $ $

The honor of having discovered the role of time preference goes to Eugen von Böhm-Bawerk. Later “Austrian” economists, such as Mises, have considered his explanation of the causes of time preference as not quite satisfactory. But the one who nails it is, once again, George Reisman:

The nature of human life implies time preference, because life cannot be interrupted. To be alive two years from now, one must be alive one year from now. To be alive tomorrow, one must be alive today. Whatever value or importance one attaches to being alive in the future, one must attach to being alive in the present, because being alive in the present is the indispensable precondition to being alive in the future. The value of life in the present thus carries with it whatever value one attaches to life in the future, plus whatever value one attaches to life in the present for its own sake. In the nature of being alive, it is thus more important to be alive now than at any other, succeeding time, and more important to be alive in each moment of the nearer future than in each moment of the more remote future. If, for example, a person can project being alive for the next thirty years, say, then the value he attaches to being alive in the coming year carries with it whatever value he attaches to being alive in the following twenty-nine years, plus whatever value he attaches to being alive in the coming year for its own sake. This is necessarily a greater value than he attaches to being alive in the year starting next year. Similarly, the value he attaches to being alive from next year on is greater than the value he attaches to being alive starting two years from now, for it subsumes the latter value and represents that of an additional year besides.

The greater importance of life in the nearer future is what underlies the greater importance of goods in the nearer future and the perspective-like diminution in the value we attach to goods available in successively more remote periods of the future. (Capitalism: A Treatise on Economics, p. 56.)

To put it more briefly: To be alive today and this year is the necessary precondition of being alive tomorrow or in fifty or a hundred years. Everything else equal, we have to value life in the present over life in the future, for if we don’t, there will be no life in the future. Thus we have to have goods or money to survive the day before we can start thinking about saving for the future.
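The structure of this argument can be compressed into a small piece of notation (my own, purely as an illustrative sketch): let u_k be the value a person attaches to being alive in year k for its own sake, and V_k the value he attaches to being alive from year k onward.

```latex
% The value of life from year k on subsumes all later years:
V_k = u_k + V_{k+1}, \qquad u_k > 0
% hence the ranking Reisman describes:
V_1 > V_2 > V_3 > \dots
```

Nearer life-years always outrank remoter ones, which is the root of the “perspective-like diminution” in the value we attach to future goods.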

$ $ $

I originally wrote this some years ago, when I was pulled into a discussion with a not too well-informed person, who claimed that George Reisman could not be a real “Austrian”, since he does not share the conventional “Austrian” view on time preference.

(Other schools than the “Austrian” have no inkling of the role of time preference.)


[1]) Mandeville claimed that this squandering would be a boon to the economy; but this is simply a version of the “broken window” fallacy and has been refuted time and again by better economists.

The Choice to Live

My morality, the morality of reason, is contained in a single axiom: existence exists – and in a single choice: to live. The rest proceeds from these. – Galt’s speech.

An objection to this is that one does not explicitly choose to live. We do not choose to be born; that choice was made by our parents (and their ancestors before them). Before we were born, we had no choice about anything.

The only situation I can think of where one explicitly chooses to live is if one is seriously considering suicide and then decides against it. But this cannot be what Galt means. It is unimaginable that Galt, or any other Ayn Rand hero or, for that matter, most of the rest of humanity, does this.

My conclusion is that the “choice to live” is an implicit choice: it is implied in all (or most) other choices we make. We make pro-survival choices – and only the suicide candidate (or the mystic, whose standard is death, not life) makes anti-survival choices.

It seems that few, if any, have raised this objection – for the only one I know of who has brought it up and answered it is Tara Smith in Viable Values, who writes:

Admittedly, the embrace of life is not usually crystallized in an unmistakable, do-or-die moment when well-defined options are laid out and a decision is imperative. […]

Rather, we choose life by choosing all sorts of specific things that constitute and further our lives. In embracing countless people, projects, objects and destinations – in loving Megan, saving money, buying coffee, studying French, playing jazz, having a child, building a career, planning a vacation, or planting a garden – a person may be choosing life. By getting out of bed in the morning and having at a day, a person may be choosing life. In setting any life-enhancing aims for himself, be they modest or ambitious, trivial or profound, short or long range, a person may be choosing life. Remember that life consists of a person’s activities, all that he does in pursuing his various ends. Thus, life is not a distinct aim that one can adopt in addition to learning French, saving money, building a career, and so on. To embrace life is to embrace the condition of having specific ends (and more, of having consistent and life-furthering ends). – Viable Values, p. 105.

Which is to say that this choice is implicit rather than explicit.

$ $ $

On a similar note: Some years ago, there was a person here in Sweden who jumped into every discussion about Objectivism and pestered us with the idea that “life as the standard” means that the goal of the Objectivist ethics is to live as long a life as possible. No matter what I or anybody else answered, he stuck to this idea and repeated it over and over again.

We could answer that there is “quality of life” as well as “quantity of life” – that

it is not the years in one’s life that count, it’s the life in one’s years.

We could quote the following (from “The Objectivist Ethics”):

Such is the meaning of the definition: that which is required for man’s survival qua man. It does not mean a momentary or a merely physical survival. It does not mean the momentary physical survival of a mindless brute, waiting for another brute to crush his skull. It does not mean the momentary physical survival of a crawling aggregate of muscles who is willing to accept any terms, obey any thug and surrender any values, for the sake of what is known as “survival at any price”, which may or may not last a week or a year. “Man’s survival qua man” means the terms, methods, conditions and goals required for the survival of a rational being through the whole of his lifespan – in all those aspects of existence which are open to his choice.

None of this helped. (His reasoning, presumably: since Ayn Rand speaks out against merely momentary survival, what else than “longevity” could be implied? The rest of the paragraph simply got lost in such a person’s mind.)

If I had said that an implication of this “longevity” idea is that Bertrand Russell must be almost exactly twice as good as Thomas Aquinas, since he lived to the age of 97, while Aquinas only lived to the age of 49 – he would of course have accepted it.

$ $ $

In getting this off my chest, I have chosen life – albeit implicitly.

$ $ $

Addendum: On Facebook, I got this comment.

Moral issues ought to be something more concrete.

Well, “life” is an extremely abstract concept, since it subsumes − how should I put it? − a vast number of concretes. For example, it subsumes every living organism that lives now, has ever lived, and will ever live. It subsumes the lifespans of every man, and every organism, that lives now, has ever lived, and will ever live. And you may certainly think of other things as well.

“Choice” also subsumes every choice that is made, has ever been made, and will ever be made. But it is far easier to form this concept, since it only requires a simple act of introspection. If you have ever made a choice or a decision, you know what a choice or decision is.

A good thing about the quote from Tara Smith above is that she gives a few concrete examples of what this “choice to live” implies. But such a list could never be made exhaustive, since it would then have to include every choice that has been made or even could be made.

Take the choice or decision to get out of bed in the morning (or afternoon, as the case may be). One then has to decide to put on one’s clothes, brew some coffee, make a sandwich, go to the bathroom to take a leak, get off to work, etc., etc.

Most of those choices/decisions are so self-evident that we hardly think of them as choices; they are automatized. It is only if one is very sleepy that one would regard getting out of bed as a choice that requires some will-power.

The “choice to live”, most often, is not experienced as much of a choice. That we want to live, we simply take for granted – unless we are extremely disappointed with life or tired of life.

Now, I will make the life-enhancing decision to stop blogging about this and prepare today’s dinner. ;-)

$ $ $

Update November 16: Come to think of it, the word “choice” is equivocal: it may refer to “the act of choosing”, but also to “the thing chosen”, “the result of the choice”.

Let’s say, to give an example, that there are two women I want to marry. I’m attracted to both of them – even in love with both of them − and both of them are willing to marry me. But, marriage laws being what they are, I cannot marry both of them; I have to choose between the alternative possible wives. But after I have made the choice, I can say: “She was my choice”, and others can say “She was his choice”.

Or take the situation I was in right before writing this down: Should I bother to publish this now? Or should I wait till later on? Or is it too unimportant to even mention it? But now I have made my choice: publish it.

The failure to make this distinction might be one reason why discussions about “free will versus determinism” seldom lead anywhere. I, as a free will advocate, will insist that the choice is actually a choice and that to say it is determined is nonsensical and a contradiction in terms. And the determinist will insist that the thing chosen, the result of the choice, is determined by everything that has happened in the past. The point the determinist is missing here is that one determining factor is precisely my act of choosing.

A Review of George Reisman’s “Capitalism: A Treatise on Economics”

This is a slightly expanded version of a review I submitted to Amazon a few years ago. This review seems to be appreciated by readers, and it was much appreciated by Dr. Reisman himself, who suggested I make this expanded version for possible publication. (This version was originally written in 1999.)

George Reisman’s Capitalism: A Treatise on Economics is perhaps the greatest treatise on economics of all time; it certainly ranks with such works as Adam Smith’s The Wealth of Nations or Ludwig von Mises’ Human Action; and in one respect I think it surpasses them: even the great pro-capitalist economists in the past have had contradictions and/or inconsistencies in their reasoning that undercut their message and make it weaker than it could and should be. If there are contradictions or inconsistencies in Reisman’s treatise, I have yet to find them.

An achievement of this kind is always an integrated whole. But if I were to single out one insight as the greatest one, it would be the “primacy of profits” principle, the insight that wages are a deduction from profits, not vice versa. This lays the ground for the most thorough and fundamental refutation of the Marxian exploitation theory that is possible; it also lays the ground for what actually constitutes economy-wide profit (the “net consumption” theory of profits) and the actual relationships between profits, wages and investment, and for many other things as well. To make a comparison, I think this discovery ranks with Adam Smith’s original discovery of the principle of division of labor, or the early Austrians’ discovery of marginal utility. I sincerely hope that this principle gets thoroughly understood by economists in the future.

Some other highlights I could mention, if only because I have not seen them mentioned by other reviewers:

The demonstration that the rise in the average standard of living rests entirely on lower prices for goods and services. This fact is obscured by the presence of inflation, and other economists (notably the Keynesians) have managed to create a lot of fog around this issue. Reisman’s analysis completely dissolves the fog. And this point also has a positive corollary. The only thing that actually does raise the average standard of living is a rise in the productivity of labor; behind such a rise stand saving, technological progress and capital accumulation; and behind these stands man’s reasoning mind.

Understanding the extent of the gulf between a pre-capitalist, non-division of labor society and a modern division of labor society. (E.g.: understanding why a rise in population would be a threat in the former kind of society, but a source of great benefit in the latter kind.)

The demonstration that one of the things capitalism is regularly denounced for – the concentration of great fortunes in relatively few hands – is actually to the benefit of everybody, not merely the owners of those fortunes.

The demonstration of what is wrong with modern “national income accounting”. To make a long story short, the “modern” accounting method makes it look like almost all expenditure in the economy is consumption expenditure, while the truth is that most expenditure in a modern advanced economy is expenditure for the sake of further production.
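A toy example may make this vivid (the numbers are my own, purely illustrative of the point as I understand it): suppose a farmer sells wheat to a miller for 40, the miller sells flour to a baker for 60, and the baker sells bread to consumers for 100. “Final goods” accounting counts only the bread:

```latex
% Conventional national-product accounting counts only final sales:
\text{national product} = 100 \quad \text{(all of it consumption)}
% Gross spending in this little economy, however, is:
100 + 60 + 40 = 200, \qquad \tfrac{60 + 40}{200} = 50\% \ \text{productive expenditure}
```

On the conventional figures, consumption appears to be the whole of spending; counting gross expenditure, half of it (in this toy economy) is spending for the sake of further production.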

And those are just a few of the highlights.

Capitalism is not always easy reading, and a beginner would be well advised to start with The Government Against the Economy (the whole of this book, however, is incorporated into Capitalism as chapters 6–8), or with some of Reisman’s shorter pamphlets (or with one of Reisman’s own favorites, Henry Hazlitt’s Economics in One Lesson). Some previous knowledge of Classical and Austrian economics is a great help. But, particularly in the first chapters, dealing with the role of material wealth in man’s life, there are passages that made me cheer aloud when I first read them, and possibly others will cheer aloud, too. (One such observation is that we value automobiles and other means of transportation for basically the same reason that we value having legs over not having legs.)

As is probably known, George Reisman was not only a student of Ludwig von Mises but also a student of Ayn Rand, and her influence permeates his book in more ways than I have space to tell. You may recall that one of the strikers in Atlas Shrugged was “a professor of economics who couldn’t get a job outside, because he taught that you can’t consume more than you have produced”. Well, this is what George Reisman teaches, for a thousand double-column pages and better than anyone has done before him.

PS: Reisman’s words of appreciation are worth quoting:

I believe that my treatment of the subject of profit is the most important and original feature of the book and that the reversal of the Marxist view of the relationship between profits and wages is one of the most important applications of my theory of profit. Those are precisely the points your review stresses. So I come away from your review with the very gratifying feeling that here at last is someone who really understands the book and has hit the nail on the head in reviewing it.

PPS: Here is my original Amazon review. (It was published in 1999 under my own name, and I don’t know why it has been changed to “A customer”.)

You may also read my review of The Government Against the Economy.

Two Observations on Definitions

Adapted from a Swedish blog post.

The purpose of a definition is to distinguish a concept from all other concepts and thus to keep its units differentiated from all other existents. – Ayn Rand, Introduction to Objectivist Epistemology, the first page of the chapter “Definitions”.

There are cases where correct definitions are extremely important and wrong definitions create havoc. It makes, for example, all the difference in the world whether one defines “capitalism” as “a social system based on the private ownership of the means of production” or as “a social system based on exploitation of the working people”. Another example is the concepts “inflation” and “deflation”; if one defines those concepts as “rising/falling prices” rather than “expansion/contraction of the money supply”, one goes completely wrong.[1]

And if you define “selfishness” as “trampling on other people” rather than “concern with one’s own interests”, you of course go completely wrong (it implies that there is no other way of concerning yourself with your own interests than precisely trampling on other people, i.e. that “man is man’s wolf”[2]).

And if one introduces a new concept, it is of course important that one defines it, so that people know what one is talking about.

But does this mean we have to define every single word we use or every single concept they stand for? If so, we would never have time for anything else!

Words/concepts on the lowest level of abstraction – i.e. those that stand for concrete, observable things, attributes/properties and relationships – are formed ostensively: one simply points at a table or something blue or something standing on a table or under or beside a table (for spatial relationships), and that is enough. They can be defined (as Ayn Rand does with “table” in her book), but it is not necessary and would be a waste of time, if one did it for every word on this level of abstraction.

The moral of this is: Don’t belabor your brain with “definition exercises” – except when it is necessary!

$ $ $

A definition consists of a genus and a species. What differentiates one species from others under the same genus is called “differentia”. A genus may be on a lower or a higher level of abstraction. Genus for “table”, for example, is “piece of furniture”, and then “man-made object”, and finally “object” or “entity” in general. Genus for “dog” is “animal” and then “living organism” (or just “organism”, since all organisms are alive until they die) and finally “entity” again. (One could put “mammal” between “dog” and “animal”, but this is a concept a child does not form until it has learned some biology in school.) Genus for “blue” is “color”, and genus for “color” is “attribute” or “property”.

Now observe one thing: the genus in a definition is always, grammatically, a noun. The species is often a noun as well, but often an adjective (e.g. “blue” and all the other colors, while “color” is a noun). No problems so far.

But what about other parts of speech?

Take interjections – such as “oh!”, “ouch!”, “hooray!”, “damn it!”, etc. What is the genus of those words and expressions? There is no interjection that is more abstract than other interjections. And what do they actually mean? “Oh!” expresses surprise, “ouch!” expresses pain or displeasure, “hooray!” expresses pleasure or approval. To that extent they perform the same function as all other concepts: they condense information. But a definition in terms of genus, species and differentia cannot be given. (And what measurements are omitted? Well, the degree of pain or pleasure/displeasure and approval/disapproval.)

Someone will of course object and say that one may define those words precisely as interjections. But the word “interjection” is not an interjection; it is a noun! So this “definition” is not a definition, but a description of the grammatical function of the word/concept.

Adverbs I have written about before, but let me say something about them again. An adverb does not have another adverb as its genus. Take those small words that we fill out our language with, both in speech and in writing, such as “well” (or “why” in certain expressions, such as “Why, this was odd”)[3]. What on earth[4] would be a more abstract adverb that subsumes them? And if one defines them as “adverbs”, this is merely a “grammatical definition”; one does not define them, but gives a description of their grammatical function.

The best definition of “adverbs” I can come up with is that they are modifiers or qualifiers: they modify or qualify another word, a clause or a sentence. (The same is true of adverbial phrases.)

The words “yes” and “no” are sometimes classified as adverbs, sometimes as interjections, and sometimes as either-or, depending on context. (This seems to depend on what dictionary one is consulting.) I personally would reject calling them adverbs, since they do not modify or qualify anything; they merely confirm or deny something someone has said or written. The best definition was given in a Swedish grammar book, which classifies them as a sub-group under interjections and calls them “answering words”. But the point here is that there is no “answering word” that is more abstract and may serve as genus; the definition “answering word” is merely a description of a grammatical function.

The same is true of other parts of speech. There is no more abstract preposition to subsume other prepositions, no more abstract conjunction to subsume other conjunctions; but their grammatical function is easy to describe.

Verbs can actually be defined in terms of more abstract verbs. The genus for “walk”, “stroll”, “jump”, “gallop”, etc. may be “move”; the genus of “stand”, “sit”, “lie”, etc. could be “be in a certain position” (although there seems to be no single word for this concept).

But then there are auxiliary verbs. Examples are “do/does/did”, “have/has/had”, “will/would”, “shall/should”, “may/might” and many more. Those words/concepts again can only be “defined” by describing their grammatical function.

And there are such words as the infinitive marker “to” and the word “it” in phrases like “It is raining” (or snowing or whatever). What is this “it” that is raining or snowing?

Don’t rack your brains too much over this issue! It is enough that I have racked my own brain over it.

$ $ $

PS November 15: There is a simpler way to explain all of this:

There is a difference between defining the thing and defining the word. The thing we call “table” is defined by giving the common, essential characteristics – a flat surface, one or more supports, the function (to put other, smaller objects on it) – and omitting non-essentials (such as whether the table is made of wood or some other material); and it is easy to state its genus, “furniture” or “piece of furniture”. But the word “table” is defined as a noun, and the wider concept subsuming nouns is “part of speech”. And a part of speech certainly isn’t a piece of furniture, nor vice versa. We may then subdivide “noun” into e.g. countable or uncountable and say that “table” is a countable noun, as opposed to e.g. “water” or “money” or, for that matter, “furniture”. We may talk about its syntactic function; it may appear as either the subject or the object in a clause or sentence. Adjectives and pronouns can also be analyzed this way.

But with adverbs, conjunctions and interjections we cannot give a definition in terms of genus and differentia – for there is no thing, no object or entity, or any attribute, quality or property, to define. But the words can always be defined in terms of parts of speech.

What about prepositions? The simplest prepositions (such as “in”, “on”, “over”, “under”, “above”, “below”, “before”, “after”) can be defined ostensively, i.e. by pointing to the relationships they stand for (like pointing to a book that is on the table, or an event that happens immediately before or after another event); but it is impossible to find another preposition that stands as the genus of those ostensive definitions. But it is very easy to define the words as “prepositions”.[5]

I have not mentioned numerals before. When we have learned the first numbers and grasped the principle of how they are formed (e.g., that “twenty one” stands for “20+1”), we understand all numbers. When we know that “a hundred” or “a thousand” stands for groups of 100 or 1,000 objects or other phenomena, we have no problem grasping what, for example, “one hundred and twenty million five hundred thousand two hundred and twenty three” stands for. So there is no scarcity of referents in reality. But in grammar we define them as “numerals”.
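The compositional principle just described – that “twenty one” stands for 20+1, and that larger numerals compose by multiplication and addition – can be sketched in a few lines of Python (the word lists and parsing rules are my own illustrative simplification; a real parser would need to handle more of English):

```python
# A minimal sketch of how compound English numerals compose:
# "two hundred" = 2 * 100 (multiplication), "twenty three" = 20 + 3
# (addition). Illustrative only; it handles simple well-formed
# phrases, not every numeral English allows.

UNITS = {
    "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
    "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
    "eleven": 11, "twelve": 12, "thirteen": 13, "fourteen": 14,
    "fifteen": 15, "sixteen": 16, "seventeen": 17, "eighteen": 18,
    "nineteen": 19, "twenty": 20, "thirty": 30, "forty": 40,
    "fifty": 50, "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90,
}
BIG_MULTIPLIERS = {"thousand": 1_000, "million": 1_000_000}

def numeral_value(phrase: str) -> int:
    total, current = 0, 0
    for word in phrase.replace("-", " ").split():
        if word == "and":                 # "one hundred and twenty"
            continue
        if word in UNITS:
            current += UNITS[word]        # addition: 20 + 3
        elif word == "hundred":
            current *= 100                # multiplication: 2 * 100
        elif word in BIG_MULTIPLIERS:     # close a thousand/million group
            total += current * BIG_MULTIPLIERS[word]
            current = 0
        else:
            raise ValueError(f"unknown numeral word: {word}")
    return total + current

print(numeral_value("twenty one"))  # 21
print(numeral_value(
    "one hundred and twenty million five hundred thousand "
    "two hundred and twenty three"))  # 120500223
```

Once the handful of primitive words and the two composition rules are known, every numeral the scheme covers is understood – which is the point of the paragraph above.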

What I could add is that it is only the very first numerals that we can grasp and define ostensively. We can see, without having to count, that we have five fingers on each hand. If we had ten fingers on each hand, it would be much harder, maybe impossible, to see it without counting. And if we look at both hands, we do not see “ten”; we see “twice five”. (How many one can see without counting probably varies from person to person, but it can hardly be many more than five.) – Ayn Rand exemplifies this in ITOE:

… project the state of your consciousness, if I … proceed to give you [a] sum by means of perceptual units, thus: ||||||||| … etc.

Enough grammar for now!

[1]) In this case, the correct definitions go to the root cause of those phenomena, while the incorrect one only names one of the consequences of the expansion or contraction of the money supply. It is an example of “definition by non-essentials”.

To define “deflation” in terms of just “falling prices” is actually even worse than defining “inflation” in terms of just “rising prices and wages”, since it leads one to confuse falling prices due to increased production (a good thing) with falling prices due to a sudden contraction in the money supply (a very bad thing). On this, see this essay by George Reisman.

[2]) “Homo homini lupus” in Latin.

[3]) Those “small words” differ a lot from language to language. Some of the examples I gave in my Swedish blog post have no exact counterpart in English. And there are many such “small words” in ancient Greek (probably in modern Greek, as well). I took some ancient Greek in school, and we were advised to simply skip those words when translating.

[4]) This is another example of how different languages may differ. In my Swedish blog post, I wrote “what seventeen?” – which would be completely incomprehensible in English. We take the numeral 17 and make an adverb out of it.

[5]) Possibly, one might use the prepositional phrase “in relation to” as the genus of prepositions.

What Comes First, the Concept or the Word?

Words stand for concepts. Typically, nouns stand for entities or things; adjectives for attributes or properties; numerals (of course) for numbers (ordinal or cardinal); verbs for motions, actions or states; prepositions for relationships.

Often, a noun also stands for an attribute, but then it is typically formed from an adjective – e.g., “length” from “long”, “breadth” from “broad”, “happiness” from “happy”, etc.

Pronouns are replacement words, replacing either a noun or an adjective. For example, if I say “he”, it stands for the person I am talking about, etc.

Some verbs are auxiliary – like “do” in “do you agree?” or “I do not think so”, or “have” in “what have you done?” or “what has happened?”. In this case, the verb has only a grammatical function.[1]

Adverbs, I would say, stand for modifications, qualifications or specifications – for example, the word “typically” above, which modifies the thoughts I was expressing. Or the difference between “I stand” and “I stand here”, “It happens” and “It happens now”, which specifies the standing and the happening. (There may be some better way to describe adverbs, but this is the best I can think of for the moment.)

And many adverbs are formed from adjectives, like “typically” and “happily”, etc.[2]

Conjunctions are concepts of relationships among thoughts (here, I merely quote Ayn Rand’s definition).

What about interjections, such as “ouch” or “hooray”? What do they stand for? My best guess is that they stand for some kind of evaluation. We say “ouch” to something we don’t like, and “hooray” to something we like quite a lot.

Of course, this was an extremely rudimentary grammar lesson.

In what order does a child form those concepts?

I think it is obvious that concepts of entities (represented by nouns) come first; then probably concepts of attributes (represented by adjectives) and concepts of motions or states (represented by verbs). Certainly, they come before pronouns. A young child beginning to speak does not refer to him- or herself as “I”; it uses its name. “I”, meaning “the person speaking”, and “you”, meaning “the person spoken to”, are really at a fairly high level of abstraction. Yet, it does not take long for a child to progress from using its name to saying “I”.

Now to the question in the blog post title.

Ayn Rand’s idea (if I have understood it correctly) is that a child perceives two or more concretes (two or more tables, two or more dogs or whatever), notices that they are similar and that they are different from other concretes, and then forms the concept “table”, “dog” or whatever. But in order to retain the concept, the child has to choose a word to denote the concept. Thus, the child forms the concept and then, to complete the process, makes up a word for the concept.

But does it make up the word?

Then how come a child born into an English-speaking environment invariably chooses the words “table” and “dog” for those concepts, while I, who was born in Sweden, chose the words “bord” and “hund”, and a French child chooses “chien” for “dog”? (The French word for “table” is the same, although pronounced differently.) A German child says “Hund” for “dog” and “Tisch” for “table”.

Now, I must have misrepresented Ayn Rand when saying the child “makes up” the words, because it clearly does not. It uses the words already existing in its own language.

But then one may still ask whether the concept actually comes before the word or after. Does the child form a concept “table” (for example) and then ask its parents “What is this called?”? Or does it hear the word “table” uttered by a parent or grown-up and then figure out what it stands for?

Well, both are possible.

A child learns his first language partly by imitation. But it is not passive imitation. If it were, the child would be a parrot, not a human.

Now I will quote Ayn Rand:

Even though a child does not have to perform the feat of genius performed by some mind or minds in the pre-historical infancy of the human race: the invention of language – every child has to perform independently the feat of grasping the nature of language, the process of symbolizing concepts by means of words.

This is true for both the possibilities mentioned above.

Why do I bother to think about this?

Well, I do not remember the period in my childhood when I learned to speak (I do not think anyone remembers that far back). Thus, I do not remember forming any concepts or choosing words to symbolize them. I did not think I formed any concepts on my own; I thought I was merely taking over (in some second-hand fashion) concepts that had already been formed by others. And I did not think other people are much different from me in this regard. So I drew the conclusion that children do not actually form concepts; they are merely taking over concepts already formed. They do, however, have to grasp those concepts independently. And this grasping of concepts has to be done by the same process as originally forming them.

But I think the Ayn Rand quote above nails the issue.

And somebody must have been the first man (or woman) to form the first concept and give it a name. Somebody must have been the first to put a simple sentence together. Somebody must have been the first to add a subordinate clause to a sentence.

But how this came about, we can only speculate. It is peering into the pre-historical past.

$ $ $

Another observation is how extremely fast a toddler learns his first language. I have blogged about this too, but in Swedish.

[1]) There are other words that have only a grammatical function. One example is “to” in “to be”, “to talk”, “to run”, etc. It is an “infinitive marker” and only tells us that the next word is the infinitive form of the verb. I may write about such words later on.

[2]) On adverbs, see also What did Ayn Rand Know about Adverbs?

Irrational facts?

I suppose you all know what an irrational number is. And I trust you don’t take the existence of such numbers as an assault on rationality or an injunction against using reason when dealing with mathematics.

Now I came across this in Ludwig von Mises’ Theory and History:

The human search for knowledge cannot go on endlessly. Inevitably, sooner or later, it will reach a point beyond which it cannot proceed. It will then be faced with an ultimate given, a datum that man’s reason cannot trace back to other data. In the course of the evolution of knowledge science has succeeded in tracing back to other data some things and events which previously had been viewed as ultimate. We may expect that this will also occur in the future. But there will always remain something that is for the human mind an ultimate given, unanalyzable and irreducible. Human reason cannot even conceive a kind of knowledge that would not encounter such an insurmountable obstacle. There is for man no such thing as omniscience. […] It is customary, although not very expedient, to call the mental process by means of which a datum is traced back to other data rational. Then an ultimate datum is called irrational. No historical research can be thought of that would not ultimately meet such irrational facts. (P. 183f; italics mine.)

Now wait a minute. Facts are neither rational nor irrational. They are just facts. The terms “rational” and “irrational” pertain to what we do in our minds with the facts. It is a misnomer and an equivocation to call the facts rational or irrational.

Or does he mean that reason cannot deal with those “ultimate givens”, just because it cannot trace them back to something even more ultimate? But this is ridiculous. If reason encounters an ultimate given that cannot be traced further back, it simply accepts it as an ultimate given. There is nothing irrational about that.

Apart from this objection (and some others I may come to think of later), Theory and History is a book I heartily recommend.

Irrational ends?

(Added May 16.)

The following quote is more troublesome:

All ultimate ends aimed at by men are beyond the criticism of reason. Judgments of value can be neither justified nor refuted by reasoning. The terms “reasoning” and “rationality” always refer only to the suitability of means chosen for attaining ultimate ends. The choice of ultimate ends in this sense is always irrational. (P. 167.)

And if you know your Mises, you know that this idea is repeated over and over in his works.

Obviously, Mises never considered Ayn Rand’s explanation of the link between “life” and “value” (or if he did, he might have considered it “irrational” and “beyond reason”).

But her derivation is fact-based. To take some highlights: That living organisms require a specific course of action to remain alive is a fact. For lower organisms this action is automatic, but for man it involves deliberation and choice, and that is a fact. And it is a fact in the sense of an “ultimate given”, because it can hardly be traced back to even more basic facts. To choose life, and the preservation and enhancement of one’s life, is certainly the rational thing to do.

A couple of pages later Mises writes:

… there is a far-reaching unanimity among people with regard to the choice of ultimate ends. With almost negligible exceptions, all people want to preserve their lives and health and improve the material conditions of their existence. (P. 269f.)

True enough. Very few people, I would venture to guess, deliberately act to harm their lives, their health, their well-being. There are exceptions, but most people, when they harm themselves, do it because of some error in their reasoning. They find the wrong means, means not suitable to the end sought, to use Mises’ way of expressing it.

But an appeal to majority is not a good argument. Majorities are sometimes wrong. And on Mises’ own reasoning and with his terminology, the majority here is as irrational as the small minority that does not take life and health as their ultimate goal.

There is a similar quote in the very beginning of the book:

Judgments of value […] express feelings, tastes or preferences of the individual who utters them. With regard to them there cannot be any question of truth and falsity. They are ultimate and not subject to any proof or evidence. (P. 19.)

That values or value judgments have no “truth value” and are just expressions of feelings or tastes is something we are taught by virtually every philosopher who is not an Objectivist. It is as common and ubiquitous as the closely connected idea that one cannot (and must not) try to derive an “ought” from an “is” – and as wrong.

Mises uses the example of someone preferring Beethoven to Lehar (or vice versa). This is a value judgment. The person who says it is saying that Beethoven, to him, is a higher value than Lehar (or vice versa). And here it is OK to talk about a difference in taste, and there is no point in trying to dispute it.

But there are so many issues where this would be nonsensical. If we prefer capitalism to socialism, this is not a matter of taste. Neither is it a matter of taste whether we prefer life to death, health to illness, happiness to misery or wealth to poverty. Such an issue can only come up when a man is so ill, or so disappointed, that he loses his taste for life. (Situations where Immanuel Kant would demand that he continue to live out of duty.)

Closely connected is the idea, so often repeated by Mises, that economics (and science in general) should be value-free (or wertfrei; for some reason Mises retains the German word). But this idea is contradictory on the face of it. It says that a theory should be “value-free” rather than “value-laden” – i.e., that such a theory is better than other theories – i.e., that it is more valuable.

Now, I have not said anything about the very good things to be found in Theory and History. That will have to wait for another time.

(See also Is Action an A Priori Category?, Is Life Worth Living?, On the Objectivity of Values, and Objectivism versus “Austrian” Economics on Value. Also Ayn Rand and Böhm-Bawerk on Value.)

Can Values Be Measured?

That depends on what we mean by “measurement”. Values cannot be measured the way we measure physical objects – by length, weight, etc. But they certainly can be ranked.

Ayn Rand, in Introduction to Objectivist Epistemology, calls this “teleological measurement”. We rank values according to their relation to a goal or an end. So, for example, food, clothing and shelter have to be highly valued, since they are necessary for the mere preservation of life. Friendship, a happy marriage and a rewarding career are valued because they enhance our life and well-being. (You can think of other examples yourself.) But such values can only be measured with ordinal numbers, not cardinal numbers: you can say that one value is more valuable than another, but you cannot express the difference numerically.

I came to think about this when reading Eugen von Böhm-Bawerk’s Basic Principles of Economic Value. He writes:

… let us imagine a small boy who wants to buy fruit with a small coin in his possession. He can buy either one apple or six plums. Of course, he will compare the eating pleasures afforded by both kinds of fruit. To make his decision, it is not enough to know that he prefers apples over plums. He must decide with numerical precision whether the enjoyment of one apple exceeds the enjoyment of six or fewer plums. To approach the situation from a different angle, let us consider two boys, one with the apple and the other with the plums. The latter would like to acquire the apple and, therefore, offers his plums in exchange. After deliberating on his eating pleasures, the former rejects four, five, and six plums for his apple. But he begins to waver when seven plums are offered, and finally makes the exchange at a price of eight. Doesn’t his trade reveal a numerical conclusion that the pleasure of one apple exceeds that of a plum at least seven times but less than eight times?

I laughed when I read this – not because there is anything wrong with it but because it is ingenious. (Böhm-Bawerk is often ingenious!)

Well, in this case there is a possibility to fix values numerically, and to do it by cardinal numbers. But it cannot be as exact as when we measure length or weight. It is an approximation, although within strict limits; in this example, between seven and eight. It hardly applies to the values I mentioned in the beginning. Actually, it applies only to values that are exchanged, i.e. economic values. (Friends and spouses are not bought and sold!)
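The bounding logic in Böhm-Bawerk’s example can be sketched as a small calculation. (This is my own hypothetical illustration, not anything from Böhm-Bawerk’s text; the function name and numbers are merely chosen to match the story of the two boys.)

```python
# The boy rejects offers of 4, 5, 6 and (after wavering) 7 plums for his
# apple, and accepts at 8. His trades thus reveal bounds on how he values
# the apple, expressed in plums.

def revealed_bounds(rejected_offers, accepted_offer):
    """Return (lower, upper) bounds on the apple's value in plums:
    it is worth more than the highest offer refused, and at most
    the offer finally accepted."""
    lower = max(rejected_offers)  # highest offer still refused
    upper = accepted_offer        # first offer accepted
    return lower, upper

low, high = revealed_bounds(rejected_offers=[4, 5, 6, 7], accepted_offer=8)
print(low, high)  # the apple is valued above 7 and at most 8 plums
```

This is exactly the “approximation within strict limits” noted above: the exchange never yields a point value, only an interval whose width depends on how finely the offers are graduated.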

It does apply when it comes to budgeting our money – choosing what food or clothes to buy or what house or apartment to live in. Here we do reason the way the boy in Böhm-Bawerk’s example reasons, weighing the pros and cons and arriving at a price we can afford for whatever alternative suits us best.

And there may be other implications that I haven’t been able to figure out yet. So take this post only as a stray observation!

(See also Ayn Rand and Böhm-Bawerk on Value.)

