The Symbolic Economics Approach to the Humanities and Humanitarian Practices
3.1 Prospects for Humanities Research into Third-Generation Networks
3.1.1 Measuring Symbolic Exchange
A little over half a century ago the study of history was given a powerful boost when the Annales school engaged in studying the economies of past eras. On the basis of archive data a start was made at that time on recreating, piece by piece, a picture of material exchange in order to investigate the distribution of wealth, the growth of institutions and the structure of markets. The result was a more complete understanding of the moving forces of history. Today’s priority is quantitative measurement of non-material processes. Without this our understanding of many phenomena is guesswork, and we can only shrug vaguely when asked for precise forecasting of choices society will face. Until recently attempts to calculate anything in the arts got nowhere because we lacked any halfway usable instruments of measurement, but now internet technologies enable us to tear aside the curtain concealing the quantitative parameters of symbolic exchange. There is nothing to keep us from moving from talk to action apart from a firmly rooted conviction that this is beyond our capacity. If thinking in the humanities moves in this direction it can look forward to a new upward spiral of development: if it does not, it faces stagnation.
The only place substantial progress has been made is in information economics, where it is beginning to be understood that attention is a resource (Herbert Simon). In the process, science is on the verge of introducing quantitative methods in the symbolic field. However, there continues to be a lack of important data about the quantity and quality of information. The quantity can be more or less estimated, but what is to be done about the quality of information? There are no outward signs, professional judgements are scrappy and contradictory, and market indicators can prove deceptive. Science encounters similar difficulties even when merely measuring conventional utilitarian values. These too cannot be compared while they remain in people’s minds; only when they turn up in the marketplace do they become manifest, through willingness to pay for exchanges, through prices. Symbolic values cannot be measured in this way. For a number of reasons, then, it is not easy to quote them in monetary terms, even if we look to the markets. I have written about this in detail in my book The Economics of Symbolic Exchange. As an example, let me recall the paradox of uniform prices, whereby movies, books and sound recordings of entirely different quality are all sold at the same price. To some extent the value of the symbolic can be arrived at indirectly, through calculation of the time, memory and attention condensed in a particular text, but this is a tortuous and thorny path which still does not bring the desired clarity. A number of significant factors remain unconsidered: some which precede the exchange (preferences, desires and motivations) and some which follow from it (emotions, new meanings and satisfaction).
It cannot be said that the social sciences ignore exchange processes, or that people in the humanities find it difficult to use the corresponding terminology. Economic historians, for example, sociologists, political scientists and others analyse events using the categories of symbolic gain and loss which motivate interest groups, but the analysis remains on a purely verbal level. Numerical quantification of the concept of exchange is entirely absent, yet that is the only thing which would make an exchange approach truly productive. When people talk importantly about “exchange”, when they explain so much by it, one would like to know what the stakes are, what is being exchanged for what and in what proportions. Alas, the parities involved in symbolic exchange could not be detected until recently. With few exceptions, data had to be scratched around for and pecked out a grain at a time. There was simply nowhere the data could be found in the necessary quantities. The richest source of information about non-utilitarian exchange was the markets of culture, but what they clarify is mainly what can be detected through monetary indicators. The box office shows us only the tip of the symbolic iceberg, the part which falls within the purview of commerce. This includes antiques, classical music, collectables, and anything else associated with scarcity. The rest, like poetry, goes unconsidered and yet, although free, some of it is priceless. What little we know about the value of leisure and attention is thanks to the advertising markets, where these resources are traded more or less transparently for money. It is, however, the exception when money is a relevant indicator of value in the symbolic field.
It is sometimes possible to derive data indirectly from market indicators by using calculations based on a number of assumptions. Thus, the symbolic value of collecting art can be estimated if we track how far the potential revenue from selling pictures lags behind the return on investing in bonds. We find that canvases appreciate in value over time more slowly than gilts. Collectors are accordingly forfeiting income and their losses can be viewed as payment for the right to enjoy art, a monetary equivalent of the value of collecting. When serfs in Russia were to be punished for their misdemeanours they would be offered a choice between being whipped or paying a fine. This is convenient for the economist because it clearly defines the price of non-acceptance of violence, although our stoical forebears on the whole chose to be whipped. Such distinct tariffs are rare. The markets in sport, medicine, and war also give some understanding of the monetary equivalents of values in the humanities, but it is far from complete. On the whole, money and markets are poor quantifiers of non-utilitarian values.
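The back-of-envelope logic of the collecting estimate can be sketched in code. The function name and all figures below are invented for illustration; a real estimate would use long-run art-index and bond-yield series.

```python
# Toy estimate of the "price of collecting": the income a collector
# forgoes by holding art instead of bonds. All figures are hypothetical.

def value_of_collecting(purchase_price, art_return, bond_return, years):
    """Foregone income = what bonds would have earned minus what the
    artwork actually gained over the same holding period."""
    art_value = purchase_price * (1 + art_return) ** years
    bond_value = purchase_price * (1 + bond_return) ** years
    return bond_value - art_value

# A $100,000 canvas appreciating 3%/yr versus gilts yielding 5%/yr,
# held for ten years:
cost = value_of_collecting(100_000, 0.03, 0.05, 10)
print(f"Implied payment for the pleasure of ownership: ${cost:,.0f}")
```

The gap between the two compounded values is the monetary equivalent of the value of collecting referred to above.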
As far as identifying the proportions on each side was concerned, symbolic exchange until recently bore no comparison with material exchange. Now, however, its contours are visible and it seems likely that in the near future it will be numerically quantifiable. Social networks on the internet are where the pattern of non-material exchange is to be found. Networks have drawn in a significant proportion of human activity and, importantly, have codified it. Internet groups are an excellent model for researching the whole of society, not only because in many respects they copy real life, but also because the processes which occur in them do not need to be processed: they are already in the form of postings and the structured actions of users. Huge numbers of people leave traces on the internet of what they are doing in their lives, both online and offline. These indicate not only the event and date but also come with emotional colouration which, in third-generation networks, is overtly expressed in ratings and points awarded. This provides an opportunity to measure resources input, for example the time spent on communicating in a social network, and to see the resultant output expressed in the user’s subjective rating. Web 3.0 sites allow us to observe the reaction to actions and objects not only on the internet but also in the outside world. The action occurs in reality, while the consumer’s attitude to it is registered on the site. Internet activity does not, of course, reveal life in all its fullness, but it is an excellent testing ground. From the flow of internet transactions data can be extracted which is of interest to a wide spectrum of sciences like sociology, political science, psychology, cultural studies, art history, linguistics and institutional and behavioural economics. What historians fish out of archives and sociologists from questionnaires will swim of its own accord into nets for researchers to harvest. 
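The input–output accounting just described can be given a minimal sketch: time spent is the resource input, the user’s ratings are the subjective output. The event log, user names and field layout are all invented; a real network would supply millions of such records.

```python
# Sketch: aggregating logged network activity into "resources input"
# (minutes spent) and "output" (subjective ratings awarded).

from collections import defaultdict

events = [  # (user, minutes_spent, rating_awarded) - invented data
    ("anna", 12, 4), ("anna", 30, 5),
    ("boris", 8, 2), ("boris", 45, 3),
]

spent = defaultdict(int)     # user -> total minutes input
ratings = defaultdict(list)  # user -> ratings produced

for user, minutes, rating in events:
    spent[user] += minutes
    ratings[user].append(rating)

for user in spent:
    avg = sum(ratings[user]) / len(ratings[user])
    print(f"{user}: {spent[user]} min input, mean rating {avg:.1f}")
```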
Possibly the richness of the networks will engender that interdisciplinarity on which so many hopes have been pinned for so long.
Values are in people’s minds and become manifest in the course of exchange. The fuller and more visible the exchange, the more detailed the statistics about acts of exchange, the more clearly will we be able to see what people are prepared to part with in order to acquire a particular good or service.
At the risk of repeating ourselves, the sole means of measuring values is to place them in conditions of intensive, remunerated exchange, that is, a market. Exchange processes in networks are already moving towards the market model, and as communications become more dense, and when in the future the internet, television and telephone communication all converge on a single platform, these processes will follow market laws ever more plainly. At the same time, the proportions of symbolic exchange will crystallise out and become particularly evident with the introduction of post factum payments and the monetisation of social networks generally. In the Stone Age too it was not immediately clear how many bananas should be exchanged for a clay pot. Once begun, the calibration of exchange correlations will become irreversible. When even a small part of the kind of knowledge we possess about material exchange has accumulated for symbolic exchange, it will powerfully influence awareness of the tariffs of symbolic exchange and ricochet back to affect the course of actual processes. At this stage the second invisible hand of the market will begin to act, operating on fundamentally different principles. As with Adam Smith’s invisible hand, its main driver is data about the observable acts of participants.
Without a doubt, we will shortly learn to extract a mass of useful information from the rich seam of logged human communication on the internet, even if today analysts do not quite know how to approach the task. There is, of course, the highly sensitive question of what we want to learn about people and life from the internet. Users seem to be producing actions, moving like Brownian particles under a microscope, but we are not sure what we should be looking for. This is probably why the predominant research approach is to view users as potential customers. At least this makes it clear who will fund the research. Works on the economics of the internet and social networks, however, can be counted on one’s fingers. Everything that can be found on this topic was written in the last millennium, when networks in their present form simply did not exist. The sociology of social networks has likewise not advanced far beyond corporate marketing, with its emphasis on delivering targeted sales.
To summarise, let us return to our principal thesis: everything is now in place for quantitative measurement of symbolic exchange. It is essential to focus the various research forces on the internet, since that is where a convenient bridgehead has appeared for making a breakthrough. Timely as this call undoubtedly is, it will not be enough on its own to move things forward. If exchange is conducted by barter, as it mainly is on social networks, the quantitative parameters we are looking for will be difficult to detect; and if for money, the measurements, as we have already noted, introduce more confusion than clarity. We will next need to resolve the issue of a unit of measurement of symbolic value, without which our intentions will not be fully realisable. We shall return to this a few chapters from now. For the present, though, let us just partly show our hand: symbolic exchange can be represented in numbers by measuring the input and output of what drives it: the transformation of objective time into subjective personal value. A person seeks to exchange the time at his disposal for something which will bring meaning to his existence and evoke a positive reaction in his mental make-up, whether by socialising, reading or study. We can characterise this transformation generally by modifying the renowned formula about capital to read: “time–text–time*”, where “*” signifies time of increased quality, and “text” stands for any symbolic product in objective form. In this formula “text” occupies the same place and plays the same role as “money” in Marx’s formula “goods–money–goods”, mediating the transformation of goods into better goods. Just as money accumulates capital in itself, so texts accumulate symbolic capital. These two kinds of capital can be exploited with variable effectiveness, an understanding of which is facilitated by traditional and symbolic economics respectively.
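Purely as a toy illustration, the “time–text–time*” formula can be read numerically: a text multiplies the objective time invested in it by a quality coefficient. The coefficient and the figures are hypothetical; how such a coefficient might actually be measured is taken up in later chapters.

```python
# Minimal numeric reading of "time-text-time*": objective time invested
# in a text returns subjective time scaled by a (hypothetical) quality
# coefficient q. q > 1 means the exchange enriched the person;
# q < 1 means the time was effectively lost.

def quality_time(hours_invested, quality_coefficient):
    """time* = time x q."""
    return hours_invested * quality_coefficient

gained = quality_time(3.0, 1.4)   # three hours with a text rated q = 1.4
print(f"time* = {gained:.1f} quality-adjusted hours "
      f"(net change {gained - 3.0:+.1f})")
```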
In the social networks the general idea of symbolic exchange assumes concrete form: who is exchanging what with whom, in what proportions, at what moment in time. All this extremely valuable empirical evidence is there, waiting to be collected and analysed.
3.1.2 Studying Dissemination of Information
The next major topic after measurement of exchange is the dissemination of information. Of utmost importance in the information society are signals and the mechanisms by which they are spread: their points of origin, their power, their trajectories, their duration and geographical coverage. Understanding the spread of information is even more difficult than measuring exchange, and of equal practical importance. Whereas the correlations to be observed in exchange are by and large static, here, by definition, everything is dynamic. Here too, however, electronic, especially third-generation, social networks offer a new and very promising bookkeeping tool, since the information they contain is both registered and processed in vastly greater detail, while the communications themselves are organised in a more complex manner. By comparison with ordinary networks with their blogger potpourri of actual postings and comments or quotations, a Web 3.0 network contains far more valuable data. There are personalised profiles, collections of ratings behind which a subject with a name and surname is visible, along with his socio-demographic characteristics and consumer history. In brief, there is a full-blown cultural anamnesis. On Web 2.0 sites it is usual for ratings, if they are collected at all, to be confined to a narrow range of topics from which no overall understanding of the user’s preferences can be formed. With Web 3.0, however, each of these reflections comes with a precise date, so that any action can be analysed in the context of other actions, and any rating within a system of other ratings. By monitoring network communications we can detect, for example, the moment and place in society where particular processes appear, who takes them up and why, by which routes and at what speed they spread. 
This enables us to track the dissemination of social awareness deep inside society, rather than relying on the sociological samples or media vox pops which currently have to serve.
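Route-tracking of this kind can be sketched under the simplifying assumption that a network logs every hand-over of an item as a timestamped record. The repost log below is invented for illustration; real data would carry the profile and rating context described above.

```python
# Sketch: reconstructing a signal's trajectory from timestamped repost
# records (who passed the item to whom, and when). Invented data.

reposts = [  # (from_user, to_user, hour)
    ("origin", "a", 1), ("origin", "b", 2), ("a", "c", 3),
    ("b", "d", 3), ("c", "e", 5),
]

reached = {"origin": 0}  # user -> hour the signal first reached him
for src, dst, hour in sorted(reposts, key=lambda r: r[2]):
    if src in reached and dst not in reached:
        reached[dst] = hour

duration = max(reached.values())
print(f"Signal reached {len(reached)} users in {duration} hours")
print(f"Mean delay: {duration / (len(reached) - 1):.1f} h per new user")
```

From such per-person records the point of origin, the routes and the speed of spread mentioned above all fall out directly.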
At one time the consumer had the choice of objects with various pluses and minuses, but when a certain level of social prosperity was reached, the social signalling function of objects, how they were perceived by those around, began to outweigh their directly utilitarian functions. Relations between the real and the perceived became more complex until a point was reached where signs became an end in themselves and capable of overshadowing and manipulating realities. They can easily signify something which does not actually exist, or which is at variance with reality. These are so-called simulacra. They are almost given official permission to bear false witness. The same applies to signals: they can refer to non-existent or wantonly interpreted signs, processes or events. (By “signs” we are referring here to something which contains information, and by “signals” to whatever transmits it.) Just as signs can be more important than the actual nature of things, so signals can prove almost more important than the facts they make known. The greater the importance of communications in our lives, the more significant do the treatment and ways of disseminating information become. Often, correctly presented information can be decisive. It programs subsequent behaviour. Signals can be transmitted selectively and interpreted arbitrarily to produce a reaction in the masses not based on an objective understanding and hence difficult to foresee. Networks enable us to track the routes taken by information not in general but individually, literally person by person, and equally individually to register their reaction. In the past there was no way of approaching this problem and, until it could be solved, humanity had no prospect of making progress in its desire to reduce the uncertainty of the future.
It can be sufficient to show people a new truth about themselves for whole classes to start behaving differently. The material preconditions have to be present, of course, but they are never sufficient on their own; skilled interpretation is more important than anything else, as Marx and Lenin demonstrated in practice. Why confine ourselves to ideology, though? A century and a half ago it took Pasteur decades to inculcate new standards of hygiene. People either believed him or they didn’t, and depending on that they washed or did not wash their hands. If convictions were that important in Pasteur’s non-ideological domain, where they were a matter of life and death, how influential must they be in the New Economy, where mind so plainly prevails over matter? Attitudes towards youth, age, beauty, charity, the upbringing of children, contemporary art, money-lending, dissidents and so on all result from work on consciousness, to some extent purposeful and in the service of particular interests, and to some extent spontaneous and not specifically managed by anybody.
For example, attempts are being made in developing countries to establish the boundaries of a middle class, which is seen as promoting stability and prosperity. An effective move would be to devise the portrait of this stratum in such a way as to include as many social groupings as possible. A mere reclassification by the media would allow people who believed themselves poor to take a step up the ladder in their own eyes and would thereby help them on to the strait and narrow path of consumerism. Teachers drag hopeless pupils tottering on the verge of failure up into the next class in the knowledge that they have only to believe in themselves for a powerful updraft to be engaged, which will grow even more powerful as they approach a goal which had initially appeared unattainable. Inner motivation is the key to understanding the effectiveness of signals, even when they are flattering or deceptive. Where there is information competition, everything depends on who proves more convincing and whose initiative is taken up first. A signal given at the right moment programs expectations and these materialise in a willingness to rally round particular initiatives, to contribute or withhold one’s energy (vote), and accordingly to influence the outcome. People feel a need to take their place in processes where what matters most is not “what?”, but “who with?”.
At the present time we have only a general idea of how information is disseminated in society. This does not let opinion-formers of various kinds off the hook of being expected to manage social perception. To a large extent they are flying by the seat of their pants and naturally make mistakes in abundance. There is no point in blaming anybody for this, since people always have a measure of freedom which introduces unpredictability into how they will react to a particular message. For example, in a period of financial crisis you need to be certain, when deciding whether to inject money into the economy, that citizens will increase consumption and not just decide to hoard resources for a rainy day. Alas, a government can only guess, because there is no unambiguous advice in works on macroeconomics. Even fewer are the hints on how to nudge public reaction in the desired direction. We cannot boast even a framework for understanding dynamic information processes. Public opinion can turn one way or the other depending on how information is presented. After a barrage of unfavourable commentary, even what seem the most reasonable decisions, taken in full accordance with available theory, may have the opposite effect to what was anticipated. Then again, moves which the same theory suggests are highly unpromising may prove successful.
Judging by the relatively smooth course of the 2008–2009 crisis, jobbing economists have been able to put out the seat of the fire. The Russian government selectively helped national corporations to settle with their foreign creditors, saving them from facing “margin calls” (a forced sale of securities at knockdown prices). Public perception of the move tended to be negative: supporting favourites with state funds when others were not provided with similar resources was indeed morally questionable. Nevertheless, these and a number of other demonstrative measures, signalling firm if not fully implemented intentions, produced the desired results. Bankers took note of the signal and did not start demanding the repayment of debts as vigorously as they might have. The threatened avalanche of margin calls was successfully averted.
Politics is largely a matter of management through the creation of legends and leaks. That is what it does, getting the millstones to grind with a minimal input of effort. The theory of how this works is embryonic, but elaborating it would enable us in future to reduce the role of political instinct and improve the quality of calculation. At present the dissemination of information is at best something going on in people’s minds. Serious modelling is far beyond our current capability, and only game theory is working on this. In public debate about the failure of a particular project we never hear mention of the dissemination, reliability or completeness (or lack of them) of information. The reason is only too obvious: a lack of theoretically grounded arguments. There is not even any halfway accepted vocabulary current in society to ensure that the whole area is not regarded as pure hocus-pocus. The result is that complex matters are outrageously simplified, with no account taken of important factors. That is damaging. While everything remains at this level of tight-lipped omniscience we can only passively observe the swirling of the great sea of events in society. By and large, nobody will be able to make any sense of either minor upsets or global social cataclysms. We will have to make do with commentary delivered with the benefit of hindsight.
Precise, systematic data about the dissemination of information can be extracted from Web 3.0 social networks and, as of now, from nowhere else. Not, at least, in the quantities we are looking for. Researchers face a number of difficulties as they wrestle with the multiple interconnections between reality, attitudes towards reality, and information about those attitudes. The superimposition of accretions of differing levels of complexity will visit on them the same sense of never being able to get a handle on anything that afflicted the philosophers of the School of Athens.
The situation is reminiscent of the spy game “I know that you know that I know…”, which even game theory seems unable to get to grips with. This is especially so in practical instances, when it really is not known who is informed to what extent about what the other participants know. For all that, no matter how intricate the game appears, it is never too difficult for the strongest players to master, able as they are to see through weaker partners. For all its labyrinthine complexity, the system is not unfathomable. Two seams can be separated out: the views or standpoints which directly determine actions, and what influences those views.
We are primarily interested in one, possibly the most important, aspect of this influence, which derives from the group nature of the structure of society. People initially react to new facts (or what they are told about them) to the extent that they impinge on their own private interests. In the course of subsequent private and public communications, individual positions become aggregated and adapt themselves to group platforms. It is important to understand how and why this happens. Personal interests alter both as a result of a new understanding of a situation and as a result of observing which platforms are most rapidly acquiring supporters and becoming viable. There are two crucial influences at work here. The first is which facts are made public and how they are presented, what is made known and what is kept quiet. The second, and most important, is how much information people have about the potential number of supporters of the particular interest groups they might consider joining, and also how many supporters the groups currently have and how fast their ranks are growing. As regards the filtering of information, the principle is well enough understood, because a number of stratagems have long been in use for manipulating society’s choices. Nevertheless, how the information cascade operates and the pathways it follows remain unknown. In order to understand how people act in particular circumstances, we need to know not only their attitude to those circumstances and their assessment of the likely final relative strength of various forces, but also the information they had when they made their decision and the extent to which they were following the herd. Figuratively speaking, what odds are they anticipating when they gamble on a particular group winning? This question is similarly posed in game theory.
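The cascade dynamics sketched here are close to the classic threshold model of collective behaviour (after Granovetter), which can serve as a minimal illustration: each person joins a platform once the visible number of supporters reaches his personal threshold. The thresholds below are invented.

```python
# Classic threshold model of collective behaviour: a person joins once
# the number of visible supporters meets his personal threshold.

def run_cascade(thresholds):
    """Repeatedly let anyone whose threshold is met join;
    return the final number of supporters."""
    joined = sum(1 for t in thresholds if t == 0)  # unconditional joiners
    changed = True
    while changed:
        new_total = sum(1 for t in thresholds if t <= joined)
        changed = new_total != joined
        joined = new_total
    return joined

# One agitator (threshold 0) can carry the whole chain along...
print(run_cascade([0, 1, 2, 3, 4]))
# ...but remove him and, with the same near-willing crowd, nobody moves.
print(run_cascade([1, 1, 2, 3, 4]))
```

The point of the model is precisely the one made above: the outcome depends less on private attitudes than on what each person knows about the current and expected number of supporters.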
The inversion of Marx’s dictum to read “consciousness determines being” (or analogously, “mind over matter”) has become a commonplace frequently used in discussions of mind games where thinking is unfettered by logic. But what determines mind? For as long as the informational substratum in which personal and social principles mature remains unresearched, the phrase “consciousness determines…”, which appears to explain something, is a mere smokescreen. Until it can be translated into numbers, it will remain as impenetrable as the word “exchange” used to be.
Two research avenues have now been outlined:
– quantitative research into symbolic exchange;
– analysis of the dissemination of information.
3.1.3 Modelling Group and Network Effects
Social networks are bursting with data for the above two research approaches. They have the further useful characteristic of enabling the analyst to get out of his study into real social fieldwork without suffering the usual associated stresses and strains. As traditionally practised, investigations in the field raise endless problems: the selection of representative groups, elimination of obstructions and “noise”, correct formulation of questions, optimisation of data collection costs, and so on. In a collaborative system much of this is no problem at all.
The social sciences peer into the future by extrapolating from the past. Ideas about tomorrow are projections from reconstructed histories, motivations and institutions of yesterday. A different approach tries to establish people’s current attitude to something and attempt a forecast on that basis. Both paths are imperfect, as marketers bringing new products on to the market know, and as those who commissioned their research know even better, and to their cost. It is futile to survey putative consumers as to whether they will like some as yet unknown ware and what it should be like to make them like it. How would people have reacted to someone who forecast the popularity of Tamagotchi virtual pets or ringtones? Novelties wow the world without needing any pre-existing demand.
The methods of social prediction currently in use have more than a few shortcomings. Interviewees fantasise about what they might want if they were subject to no limitations and could ignore the cost of their answer. Some of the sharp corners can be avoided or smoothed over if the fieldwork is meticulously prepared, although few people actually have the energy to do this. With the traditional approach, however, there always remains an insuperable obstacle: a person interviewed individually or in a group is not the same person he will be when acting in a real human environment. The circumstances in which he talks to the interviewer have little in common with those in which he will act in the future. The main thing missing when answers are gathered by interview is that only as matters progress does it become clear what the group norm is going to be, and how actions are going to fit with social pressures and social acceptability. Real-life behaviour takes its cue from leaders and trends and the attitude of people from one’s own circle towards them. This dimension cannot be modelled in a research laboratory. A result based on an individual or a small group, when projected on to the actual behaviour of large numbers of people, can carry an unanticipated margin of error sufficient to change a plus to a minus. This is the Achilles’ heel of all experiments conducted with conveniently available human resources, whether the students of a particular professor or some other cohort close to hand. The effect is quietly passed over, but it deprives the results of the greater part of their credibility. Results obtained from a small group are not representative because of the group and network effect: they may differ startlingly from the behaviour of a large community, because social solidarity has a decisive effect on the dissemination and perception of information, providing an external source of motivation. Ignoring this in experiments is tantamount to researching a tornado in a glass of water.
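A stylised simulation can show why a small-group result fails to scale: identical private inclinations produce different collective outcomes once each person also weighs the visible share of adopters. All parameters below are invented for illustration.

```python
# Stylised sketch of the group/network effect. Each person mixes a
# fixed private inclination (1 or 0) with the currently visible
# adoption share, and adopts if the mix exceeds 0.5.

def adoption_share(n_inclined, n_total, social_weight, rounds=20):
    """Final share of adopters after repeated mutual observation."""
    adopted = n_inclined / n_total
    for _ in range(rounds):
        yes = 0
        for i in range(n_total):
            private = 1.0 if i < n_inclined else 0.0
            p = (1 - social_weight) * private + social_weight * adopted
            yes += p > 0.5
        adopted = yes / n_total
    return adopted

# Measured in isolation (no social feedback), 45% say yes...
print(adoption_share(450, 1000, 0.0))
# ...but under strong social feedback the same split collapses to zero,
print(adoption_share(450, 1000, 0.95))
# while a 60% split snowballs into unanimity.
print(adoption_share(600, 1000, 0.95))
```

The plus-to-minus reversal mentioned above is exactly this tipping behaviour, which no interview of isolated subjects can detect.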
In large-scale social networks on the internet this very basic problem is not merely avoided: it simply does not and cannot exist. Collective action can be investigated there in its vast natural dimensions, not crammed into laboratory conditions. The experimental world merges with the practical world, something one could only dream of in the past.
Let us move on from these generalising remarks about methodology to examples of research which can be conducted within social networks, and let us begin with an instance which clearly shows up the limitations of traditional methods.
3.2 (Un)Happiness Economics
A serious hindrance for economic policy and economics generally is the lack of relevance of data collected by traditional methods. This is particularly noticeable when, for example, there is a need to correlate the state’s economic strategy with how people are going to feel after it has been implemented. Governments do not control the quality of life either completely or directly. They can only bring about improvements in certain aspects, but which are the right economic goals and which are wrong? Which goals will citizens be grateful to see achieved? Economists need to set the overall direction of economic development so as to increase the perceived quality of life. They also need criteria to enable them to track how effective the political line chosen is proving, and to make comparisons between different countries.
On the level of common sense and common parlance, outlining the ideal towards which we should be striving might not seem all that difficult, but in fact devising workable criteria proves exceptionally hard. The traditional approach is to identify the most important components of living: the basics, like food, accommodation, heating, clothing; then socio-cultural and educational standards; medical care; and the appropriate level of political and economic freedom. A system of quality indicators is then elaborated for each group. Most widely used is the Human Development Index (HDI), which brings together life expectancy, quality of education and real GDP per capita. Whether using the HDI’s methodology or any other, there are difficulties in devising a unitary indicator capable of synthesising fundamentally different features while reflecting real life convincingly. A first stumbling block becomes apparent with the effort to decide what to include in the system of indicators, and what to leave out as being too difficult to quantify. These difficulties are especially evident in the educational and cultural parameters. If income and life expectancy can be calculated fairly accurately, there is no simple solution for assessing quality of education or religious freedom. Even if we could measure everything agreed to be appropriate, how are the heterogeneous parameters and logics to be reduced to a final index? Estimating the appropriate weighting is the next greasy pole. When at last, despite all these difficulties, we have a system, admittedly burdened with compromises, a final problem arises: how to deal with the heterogeneous distribution of goods between people. Economists and politicians have average values at their disposal, but how are they to be interpreted if that quantity of goods is not available to a majority of the population? The HDI has been published since 1990 and, despite its shortcomings, is regarded as a useful checklist.
In practice, however, discussion is usually restricted to the more workable index of Gross National Product per head of the population. In other words, the conclusion is that the more goods and services there are in circulation, the better.
Right, then: economists judge a country’s development on the basis of Gross National Income (GNI), the Human Development Index, or some other complex of measurable indicators. We need to understand that all these methodologies work better where there is a long way to go before basic needs are satisfied, that is, in poor countries, since many aspects of life there are closely linked to Gross National Income. As regards the “Very High Human Development” states (in the HDI rating these are the top several dozen countries), the carefully balanced weighted indicators are clearly unsatisfactory because they overlook the existential pulse of the society. It can happen that, despite thoroughly enviable ratings, people feel awful (as we can see, for example, from the decadent mood showing through the cosmetic mask of feature movies). To try to equate rank in the HDI ratings with how a country’s citizens feel is the equivalent of judging a play’s quality by the amount of scenery and stage props it has.
We can try to deduce the quality of life from the indices, but alternatively we can directly measure subjective satisfaction with life and see how closely it matches the indices. This is a way of checking the efficacy of the indices and making adjustments to them. Indicators are of value only to the extent that they help us to make people happier, which is what happiness economics attempts to do. This is a subdivision of economics which since the 1970s has been studying the correlation between objective indicators and how people actually feel. Respondents are asked directly whether they are satisfied with their life or not. The questionnaire data suggest that a sense of well-being is linked less closely to economic indicators than is commonly believed. This seems particularly to apply once a tolerable standard of living has been achieved. In different countries this ranges from US$200 to $1,200 per person per month. Below this level an increase in income has a clear positive impact on satisfaction, mainly because of a decrease in the sense of unhappiness which arises from feeling defenceless and oppressed by living in straitened circumstances. Above this level the satisfaction curve all but ceases to rise. This chimes in with data obtained from the sociology of labour and, in particular, with research into the motivation of managers. At first material recompense stimulates people to throw themselves into their work, but this stops when a certain income level is reached. It does, admittedly, restart much further up the scale, but only at very high bonus levels. In other words, although people believe that the more money a person has the better, that “better” may not be so significant as to justify the additional effort needed to earn it. If people are not hungry and already have communications facilities, then being provided with more tinned tomatoes or telecommunications will not make them much happier. 
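The satiation effect described above can be sketched as a toy curve. The figures below (a US$700 monthly threshold, a logarithmic and heavily damped rise above it) are arbitrary assumptions chosen for illustration, not estimates taken from any survey:

```python
import math

def satisfaction(income, threshold=700.0):
    """Toy model (illustrative only): below a subsistence threshold,
    satisfaction climbs steeply with monthly income; above it, the
    curve all but flattens out, echoing the survey findings above."""
    if income <= threshold:
        return income / threshold            # steep, roughly linear rise to 1.0
    # Above the threshold, further gains are logarithmic and heavily damped.
    return 1.0 + 0.1 * math.log(income / threshold)

for monthly in (100, 400, 700, 1400, 7000):
    print(f"${monthly:>5}/month -> satisfaction {satisfaction(monthly):.2f}")
```

Doubling income from $350 to $700 adds far more satisfaction in this sketch than doubling it again from $700 to $1,400, which is the shape of the curve the questionnaire data suggest.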
This suggests that the picture obtained from lists based on economic parameters may differ from what people are actually feeling as greatly as a snapshot taken in a photo booth differs from a photographic portrait.
Although this assertion seems very obvious, actual economic policy ignores it, carrying on as if what it achieves and what people perceive were perfectly correlated. Intuition suggests, however, that the gap may be very wide indeed and that consequently this is an issue requiring study. If our suspicions are correct, there is a need to amend both the indicators and the policy.
Numerous surveys have shown that an increase in income cannot be directly converted into an increase in happiness. There is a whole variety of reasons for this, of which the most obvious is that what matters to a person is not the absolute level of goods they possess, but that the level should be rising. The process of improvement rejoices the heart almost more than the result. People become accustomed to the level of well-being they have achieved and begin to regard it as no more than their due. They need to see improvement, but the better the situation they find themselves in, the more difficult it is to achieve perceptible improvement. Gossen’s law of the diminishing marginal utility of goods partly applies to a complex of goods as well, and it is to the credit of economists that this is taken into account when calculating the HDI. The result is that people feel they are stagnating, and they do not feel good. This is why it is no bad thing for the process to proceed smoothly and for improvements to take place over an extended period. Contentment is thereby prolonged.
A second, more subtle, consideration is that if we represent the correlation between contentment and objective improvements with a graph, the curve will resemble a flower petal leaning to one side. In science this form is called a hysteresis loop. Upward movement along this curve is delayed, while downward movement is accelerated. If well-being swings upwards or downwards, the positive and negative parts of the amplitude by which the emotions swing do not coincide. In other words, losses make people comparatively sadder than analogous gains make them happy. It is preferable for the rise to be steady and without fluctuation, even if that means there are no great leaps forward. If there are dips, the hysteresis produces net emotional losses.
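The asymmetry can be shown with a minimal sketch. The loss weighting of 2.0 is an arbitrary assumption made only for illustration; the point is that a volatile path and a steady path with the same objective gain end up with different felt totals:

```python
def felt_change(delta, loss_weight=2.0):
    """Illustrative asymmetry: a loss is felt more keenly than an
    equal gain. The loss_weight of 2.0 is an arbitrary assumption."""
    return delta if delta >= 0 else loss_weight * delta

def net_feeling(changes):
    """Sum the felt (not objective) value of a sequence of changes."""
    return sum(felt_change(d) for d in changes)

steady = [1, 1, 1, 1]        # smooth rise, no dips
volatile = [3, -2, 4, -1]    # the same net objective gain of 4
print(net_feeling(steady))   # → 4
print(net_feeling(volatile)) # → 1  (3 - 4 + 4 - 2)
```

Both paths deliver an objective gain of 4, yet the path with dips leaves a felt total of only 1: the net emotional loss in the hysteresis.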
This emotional asymmetry in perceiving life’s ups and downs has a consequence which is not immediately obvious. Since there are many parameters of quality of life and not all of them will be rising at the same time, a hitch anywhere may subjectively outweigh all the gains, even if they have been achieved on many fronts. The system of weighted indicators takes no account of this, but it should. Recalculating these “downs” to their true weight can radically alter the picture.
To be completely accurate, we should also mention that events which occur less frequently evoke a hypertrophic emotional reaction. A kind of trigger operates which scales emotions up or down depending on how frequently they occur. For relatively well endowed societies success is the norm and unpleasantness is relatively rare. This means that success evokes a modest positive reaction while unpleasantness produces a sharply negative one. Those living in certain poor countries demonstrate the opposite. Their governments find it relatively easy to make their subjects, whose lives are hard, happy by tossing them a crust from time to time. People get inured to misfortune. They react less sensitively to what is just the latest disaster. At the same time they are pleased by small things. Mechanisms of emotional adjustment kick in which mitigate inequality in standards of living, blunting or sharpening perception of more frequent or rarer occurrences. The psychological metabolism of the individual and the society maintain a stable proportion of the positive and the negative and it is quite difficult to produce a relative shift from a level which has become established. We might even half seriously propose a law of conservation of happiness analogous to the law of conservation of energy. Empirical data is presented in Appendix 1 in support of this.
Mechanisms of emotional asymmetry benefit the poor and from this a number of conclusions follow in respect of policy aiming to overcome inequality. Giving priority to the initial phase of gains will target that sharply rising part of the satisfaction curve where the first small results give a marked increase in the feeling of good fortune. There should really be an area of happiness economics called unhappiness economics or, more precisely, “anti-unhappiness economics”, to teach us how to climb out of unhappiness. Let those who are protected from major disasters worry about happiness. The more so since the surveys show how difficult it is to get happiness right. You can’t just turn on a tap.
Interesting effects apply also to the well-off. Gossen’s law of diminishing marginal utility operates as a brake on happiness, but where a complex of goods is concerned it is only partly applicable. Decreasing levels of satisfaction are obtained from collections of goods whose overall consumption produces no cumulative effect, but collections can also be assembled in which the increase in value is far greater than that arising from each constituent in isolation. This applies to socio-functional consumption like education and creativity, or when a transition to a higher quality is ripening and needs only a little more to make it happen. Then, despite Gossen’s law, an anomalous upsurge of satisfaction is observable. This rule, together with emotional hysteresis, modifies the happiness curve. It normally levels off, but here the graph looks different: upon reaching a certain level of economic prosperity the indicators can soar upwards. This cumulative increase of value is exceptionally important in the New Economy. Traditional economics, both in theory and in practice, has a blind spot here, with the result that to date it has rarely been used effectively.
A further known cause for a discrepancy between indicators of quality of life and the individual’s sense of well-being is comparison with those around you, your immediate circle. If GDP is rising, it is doing so for almost everybody at the same time with the result that there is an insufficient change in a particular individual’s rating in comparison with his neighbours to generate a surge of positive emotion. By extension, a comparator which is remote in time or space is felt less keenly (except for comparison with our own youth, with all the accompanying embellishments of memory). On the whole, comparison with flourishing Danes or impoverished Papuans does not have much effect on how Russians feel (except perhaps about the glories of European socialism as depicted in the media, which we might find less palatable in reality.)
Summary indices, then, are far from summarising everything they should. There can be minuses left outside the brackets which are capable of cancelling the observed pluses. These include such things as work-related stress, anxiety about the future, the breakdown of family values, the loneliness of the big city, and culturally inspired problems with ageing and appearance. The reverse is also possible, where minuses are taken into account but not the pluses. A country may be a world leader, for example, in devising laws and standards to regulate complex, ambiguous issues like cloning, euthanasia, chemical stimulation of the brain, gay marriage, the treatment of domestic animals, problems of copyright, and receive no credit in the index for experiments which the whole world follows with interest.
The system of indicators appears to give no credit to North Americans for their unparalleled legal constructs regulating sexual harassment, consumers’ rights, or the protection of children. Such recognition may be far in the future. There are whole areas which have a marked effect on happiness but where objective indicators, including financial ones, are sure to work badly. These include culture in the broad sense and cultural consumption in a narrow sense.
Although happiness economics operates with information obtained directly from people, and in consequence puts itself forward as almost a Supreme Court in respect of indices, its own methodology is not beyond reproach. There are difficulties in defining the moment in time for which the survey results can be considered valid. Obviously there can be delayed effects not evident at the time of the research, so that the results can never be regarded as final. Intensive economic growth, which inclines people to rate positively, goes hand in hand with emotional exhaustion and underinvestment in cultural capital. Everybody is pursuing money, but without a proportionate increase in symbolic capital the higher degrees of satisfaction will not be attained. Rapid growth additionally takes a bite out of the next generation’s quota of happiness by pushing up the bar of expectations and engendering a dangerous split between the ideal and reality, between what young people aspire to and what they can realistically hope for. Children suffer the consequences of a misconceived parental sense of duty which aims to create favourable conditions for them but simultaneously eats away at the potential for them to enjoy achievements of their own. It is no longer the case that “children are where nature takes its holidays”. Instead children take their holidays in accordance with human nature, and the older generation by allowing them to live with everything provided deprives them of the incentive to be energetic. The trajectory of a prosperous society is often likened to going up a staircase which leads downwards. That is not mere words, although it might be more accurate to liken it to trying to run up the down escalator. The pace achievable by the fittest and most successful members of society may be beyond the reach of the majority. 
Meanwhile the leading social groupings, maintaining their distance from the rest, shielding themselves from a sense of stagnation, continue like meths addicts to rush to new highs. But for them, as for all the others, this leads to a psychological overloading which can cancel out everything that has been achieved, as is signalled by the highly developed industry of psychotherapy. Every dollar it earns should be multiplied by 100 (if not by 1,000) and deducted from the measure of GDP.
Economics cannot really be reproached for failing to increase happiness, since the area it is responsible for is not happiness but development. If the indices reflect anything, it is not happiness but the provision of conditions in which a person may discover themselves, enjoy freedom to choose and surmount their circumstances. It is the complexity, richness and diversity of life. That is why they are called development indices. We would have far more justification for judging the validity of indices by subjective feelings of satisfaction, as happiness economics does, if people associated happiness primarily with personal freedom, but only a few individuals see things in that light. Freedom and complexity are an inseparable couple, with the result that by no means everybody is free and happy at the same time (just as not all who are unfree are unhappy).
A further compromise inevitably present in happiness surveys is caused by the difficulty people face in attempting to summarise how their life feels over a longer period of time. This is why the rating of a calendar year will be heavily dependent on whether the question is asked when the interviewee is going through a good or a bad patch. A further element of randomness finds its way into responses from the fact that the present is not a unity but structured in three-part blocks of experience which include memory of the past, the present as experienced at a given moment, and the future as it is envisaged. It is uncertain which of these elements will be selected by the introspective eye, which emotions a person will voice during the survey.
The next problem arises because it is easier to rate something if you have something to compare it with; that is, if a person can simultaneously measure several comparable acts or periods against each other. On a recommendation site the users rate movies, celebrities, hotels, mobile telephones, and much else, but there is a knack to this apparently straightforward procedure. After rating several dozen or several hundred objects you may suddenly realise that you initially gave too high a rating to something right at the beginning, which has resulted in unfairness to objects experienced later. For example, you may have impulsively awarded “10” on a 10-point scale, only to come across something better. This is why judges of aesthetic forms of sport hold back the top scores for only the most outstanding performances. Because people feel the need to rerank their impressions, they should have an opportunity to review their ratings at any time. Overall ranking with the aid of paired comparison analysis is a rational algorithmic procedure which gives good results, but does not work in the relatively infrequent happiness surveys because people in most cases have nothing to compare.
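The paired comparison procedure mentioned here can be sketched minimally as follows. The movies and scores are hypothetical; in real use the prefer function would ask the user which of two items they liked better, rather than consult a stored score:

```python
from itertools import combinations

def rank_by_paired_comparison(items, prefer):
    """Rank items by counting pairwise wins (a simple Copeland-style
    aggregation). prefer(a, b) must return the preferred of the two."""
    wins = {item: 0 for item in items}
    for a, b in combinations(items, 2):
        wins[prefer(a, b)] += 1
    return sorted(items, key=lambda item: wins[item], reverse=True)

# Hypothetical example: a viewer's private scores stand in for
# the answers they would give to each pairwise question.
scores = {"Movie A": 7, "Movie B": 9, "Movie C": 5}
ranking = rank_by_paired_comparison(
    list(scores), lambda a, b: a if scores[a] >= scores[b] else b)
print(ranking)  # → ['Movie B', 'Movie A', 'Movie C']
```

Because each item is only ever judged against another item, the procedure sidesteps the problem of a rating scale exhausted too early; re-ranking is simply a matter of answering a few more pairwise questions.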
To conclude: happiness surveys are not entirely harmless. Having once given voice to his rating, a person will start believing what he has said. It becomes a fact of his existence. It is entirely possible that the Russian habit of denigrating all things Russian is a reaction to an alarmingly low position in ratings. For all their drawbacks, happiness surveys are undoubtedly useful, but no matter how far their results diverge from and discredit traditional indices, economic policy will continue to be based on the latter. Governments, like their peoples, tend to press on regardless in order not to look worse than others, even though it might only be necessary to re-adjust the measurement procedures for life in a number of countries to seem not nearly so dismal. The problem all along may just have been that those states were competing under rules ill-suited to them. It is much like entering a long-distance runner in a sprint and then disparaging his performance. When they find themselves in a similar position, nations may engage in excessive self-excoriation, develop inferiority complexes and even follow someone else’s path, when in fact the problem lies in the way the scores are totted up. They should be glad to have the advantages of those who are catching up: a potential for improvement which the leaders may already have exhausted. No less importantly, they have the opportunity of learning from other people’s mistakes and taking shortcuts. The leaders too have their problems if they allow themselves to be mesmerised by the indicators, pedal faster and faster, and finally collapse in exhaustion.
Vaulting ambition led to the world financial crisis of 2008–2009. In pursuit of raising a partly fictitious quality of life, labour resources were overstrained. Some found themselves incapable or unwilling to work more intensively in order to consume more. This revealed the Achilles’ heel of the New Economy with its focus on desires. An excess of goods poured off the conveyor belt, many of which could have been dispensed with without causing any real hardship. At the first serious setback customers sobered up and the mirages which, with the assistance of marketing consultants, filled their heads vanished in an instant. It became clear that a whole range of goods were not essential for personal happiness, especially when those around were tightening their belts without evident ill effects. Many cut back on consumption, dealing a cruel blow to manufacturers who found themselves working in a very sluggish market. Initially economic agents do not know which part of the assortment should be reduced and by how much, which causes them to panic. Uncertainty about the production plan spills over into uncertainty about jobs and investment. As there is no way of knowing which new business projects will be profitable tomorrow, they don’t get off the ground, while those that have are frozen. This means the capital invested in them is neutralised, while credit liabilities crush borrowers. Immense resources have been put in play to keep ahead of desires, all those third and fourth houses and cottages by the river which were going to be lived in for 1 or 2 weeks a year at most.
An economy designed to anticipate every trend in consumption had only to falter in one place. As soon as this became widely known, manufacturers were petrified as the new climate sent business plans into negative territory. Humanity’s baggage of desires, entirely manageable only yesterday, was suddenly too heavy to carry. The anticipated utility from imagined goods and the frenzied pace of work needed to pay for them no longer balanced. No conventional indices reveal disjunctions of that kind. Data from happiness surveys might have signalled that something was wrong, but even they were no help in warning where the threat was coming from.
We can expect a radical improvement in prediction if we can organise systematic measurement of satisfaction levels and adjust indicators of social well-being accordingly. This is entirely realistic, bearing in mind that collaborative social networks are literally one step away from being able to provide the primary data required. They will enable us to move from sporadic survey campaigns, with all their inaccuracies, to a workable reflection of the quality of subjective time. It is a major plus for social networks that people use them for their own purposes and have no interest in observation and prediction. They register their own consumer experience (without which they cannot obtain recommendations), and as a result create detailed reports of their feelings about life. The data is collected routinely, without requiring a special effort. A subjective picture of well-being, more complete and more precise than any obtained from offline surveys, is deduced from the ratings accumulating in the database. The “respondent” is not placed in an artificial situation where his judgements are skewed by the fact of the survey itself. He does not answer questions about happiness but is engaged in something quite different. He gives his ratings as routinely as he makes his purchases and is not in the least interested that, by representing the demand side, he is activating the invisible hand of the market. If, however, it turns out that the user’s current ratings are mostly positive although he states in happiness surveys that overall everything is terrible, that merely signals that something is wrong with his inner scale. Either he is misinterpreting the realities of his life or he is singing from someone else’s hymn sheet. Possibly, when he looks at his own ratings, he will realise that all this time he was entirely happy. What a relief!
The difficulty of detecting the substance of happiness prompts us to seek a new language capable of conveying socio-cultural dynamics. The need for this has long been felt. Let the reader not be deterred by the academic phrasing of this issue. It has a direct bearing on everyday life, since being informed, having knowledge and understanding, are synonymous with action in the New Economy. A system of measurement is needed which can detect subtle movements in society dynamically, on the whole accurately, and attached to specific times, strata and groups. We also need an adequate descriptive language to replace that jingling of the fool’s bells of populism which reduces a complex system of factors to primitive dilemmas, after which everybody just tries to see who can shout loudest. The third-generation internet is an environment where there is no need to dumb down and simplify because this is where the quality of subjective time can be measured. The myriad ratings of consumer actions which accumulate in the database and behind which stands the quality of time, that is the raw material we have been seeking for multi-factor dynamic analysis. Every judgement is attached to a time, to personal data, and to a clearly defined event. Any act of consumption takes its place within a series of other actions performed by a particular person, and each rating can be analysed not in isolation but within the system of his experience of life. This manifest connectivity was effectively inaccessible to researchers, but now, with the right tools at their disposal, the humanities will be able to take the next step forward.
3.3 Measuring Happiness
Traditional economics engages mainly in a struggle against imbalance, scarcity, and inequality, because these are the cause of many misfortunes. If, however, there should ever be an end to the hardship of material existence then, as Schopenhauer prophesied (and not only he), the place vacated by the battle against them will be taken over by boredom. Overcoming this anticipated misfortune is a matter for the New Economy. Its fundamental concern is to engage people in something meaningful at a time when the most acute problems have been solved and interesting forms of activity are in abundant supply. Admittedly, many of them will not suit everybody: not everybody has the skills to be a writer, for example. What might a positive programme of happiness look like?
The fundamental message I would like to convey is that the ultimate function of the New Economy is to build up interpersonal communications. It is not to plot against homo sapiens, who in recent times has been transformed first into homo economicus and then, in our days, increasingly demoted to the lowly status of homo consumerensis. Opponents of the consumer society claim that this metamorphosis is occurring at the behest of capital, which requires ever greater volumes of consumption in order to sell its products. The true mission of the New Economy is to create conditions for the self-discovery of each and every one of us. This can be achieved by constructing and servicing groups catering for a great variety of demands. Collaborative filtering facilitates both. While the New Economy concentrates mainly on providing goods for already identified groups, the collaborative system will help to bring people together in groups and then provide for them to interact. The precondition for this is the collection of users’ ratings or opinions, on the basis of which like-minded people can be identified and recommendations generated. Ratings are so interesting because they characterise both the consumption of products and those consuming them. Both are aspects of a judgement about the quality of time a particular individual spends consuming a particular good or service. They enable us to derive indicators of happiness: we have only to summarise the ratings of the time various people spend over a given period. The collaborative system is an already existing tool for measuring happiness.
No matter how one defines happiness, a correlation is clearly to be traced with positive experience of time. Without straining credibility, we can associate the two, whether happiness is accumulated piece by piece, like a mosaic, or whether it is spontaneous and all-embracing, as if a gift from above. When a person has many positively tinged intervals of time behind them, in the present, and ahead of them, we can judge their lot to be a happy one. The greater the proportion of highly rated time and the less low-rated time there is, the happier the person. In mathematical terms, the quantity of happiness is the integral of a person’s quality time. For a number of reasons the person himself may determine this integral very inaccurately. If every fragment of life without exception could be rated, the sum of the ratings would characterise happiness perfectly. For the present, totally complete data is not obtainable; indeed, it is unattainable in principle, and it is not actually needed. It is enough that, as recommender practice expands into new areas, quantitative indicators will accumulate. With time the map of subjective satisfaction will be sufficiently detailed to serve practical purposes. In a number of respects it is already perfectly usable thanks to the millions of users of recommendation sites who systematically rate the movies and plays they have seen, the books they have read, and so on. In so doing they have documented an appreciable proportion of their cultural activity and, in order to understand how it influences the sense of happiness, we have only to multiply the assessments by the length of time involved. Such calculations are not difficult, given that the duration of particular types of cultural consumption is fairly standard. For simplicity’s sake, we will disregard the effects of post-experience and aftertaste. We can thus weight the proportion of cinema, theatre, music and reading.
Let us suppose that over the course of a year a person has watched 200 movies and the distribution peak of the ratings he has given is “8”. From this we may deduce that the cinema industry has presented him with some 300 h of, by his own criteria, extremely high quality time. If, after five theatre visits he has not once come away with impressions deserving more than a “5”, which signifies “average”, then he has misspent 15 h and they should be subtracted from his symbolic income. On this simple example we can see the kind of practical use which can be made of observation of ratings. Either he should stop going to the theatre, accepting that in its current state it is not to his taste, or he should take greater care about the productions he chooses (perhaps making better use of the recommender service).
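The arithmetic of this example can be written out as a small sketch. The assumptions that each movie lasts about 1.5 hours and that every movie receives the modal rating of 8 are simplifications made for illustration:

```python
def quality_hours(events, neutral=5):
    """Split 'symbolic income' into hours gained and hours misspent.
    Each event is a (rating, hours) pair; time rated above the neutral
    mark counts as a gain, time rated at or below it as a loss."""
    gained = sum(hours for rating, hours in events if rating > neutral)
    lost = sum(hours for rating, hours in events if rating <= neutral)
    return gained, lost

# The example from the text: 200 movies at roughly 1.5 h each, all at
# the modal rating of 8, and five 3-hour theatre visits never rated
# above the "average" mark of 5.
movies = [(8, 1.5)] * 200   # about 300 hours of high quality time
theatre = [(5, 3.0)] * 5    # 15 hours to subtract from symbolic income
gained, lost = quality_hours(movies + theatre)
print(f"quality time gained: {gained} h, misspent: {lost} h")
# → quality time gained: 300.0 h, misspent: 15.0 h
```

The same routine could be run over any user’s rating history to show at a glance which forms of cultural consumption are paying off for them and which are not.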
The very first analysis of data extracted from ratings promises to yield new knowledge. For example, if we measure volumes of quality time and extrapolate them per head of the population in areas like cinema, literature and music, as far as is currently practicable, we will be able to establish the current norm of happiness. In addition, from the ratings statistics we will be able to derive directly the distribution of personal time and money budgets between different types of leisure, and to study for each individual the conversion rates of objective time into quality time. It will also be possible to compare the financial cost of different ways of achieving the same degree of satisfaction or, from the opposite angle, to establish differences in the quality of time acquired for a particular level of expenditure. If, for example, it is made generally known that movies and shows on free-access TV channels rate on average 2 points lower than those on pay services, the number of subscribers to the latter will increase. Such are some of the ways we can work with the statistical data which accumulate in a Web 3.0 system. There is more detailed discussion of this in the Appendix.
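One way the conversion rate of objective time into quality time might be computed is sketched below. All figures are hypothetical, and the conversion rule (weighting hours by the mean rating on a 10-point scale) is only one possible convention:

```python
def convert_to_quality_hours(hours, mean_rating):
    """Crude conversion: weight objective hours by the mean rating
    on a 10-point scale. One possible convention, not a standard."""
    return hours * mean_rating / 10

# Hypothetical annual figures: (objective hours, mean rating, money spent)
leisure = {
    "cinema":  (300, 8.0, 2400),
    "theatre": (15, 5.0, 750),
    "reading": (120, 7.0, 300),
}
for activity, (hours, rating, cost) in leisure.items():
    q = convert_to_quality_hours(hours, rating)
    print(f"{activity:8s} {q:6.1f} quality-hours "
          f"at {cost / q:.2f} per quality-hour")
```

Even on these toy numbers the comparison the text calls for falls out directly: the same money buys very different amounts of quality time depending on the type of leisure.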
3.3.1 Measuring Subjective Time
Since we are giving so much prominence to subjective time, let us consider the concept carefully and define it as the intrapersonal experience of objective time. A very important characteristic of inner time is a person’s subjective estimate of how much time passed while he was engaged in something and not looking at his watch. There is a lot of evidence to indicate that perceived (psychological) time and objective (chronological) time do not necessarily coincide and that, moreover, depending on his emotional state, a person may underestimate or overestimate the length of elapsed time several times over. Chronological time flows evenly, so what is happening to it in the human mind? The greatest intellects have grappled with this question, beginning with St Augustine, who associated inner time with a distension of the soul, and continuing with Immanuel Kant and Henri Bergson. The same interest permeates the writings of Marcel Proust, James Joyce, and others. Not only poets and thinkers, but all of us occasionally catch ourselves thinking that time is flowing anomalously and that our inner chronometer is out of sync with the hands of the clock. When you are eagerly anticipating something, time drags agonisingly slowly; when you are engrossed in something, it flies by.
There are well known experiments confirming that this inner sense of time is no illusion but an innate human ability. It is a sixth sense (or seventh if you add the sense of balance to the usual five). It is a universal characteristic and everybody’s inner clock can speed up or slow down. This is particularly noticeable if there are no external signals to confirm the passage of time. Test subjects lose their sense of time if they are in caves or immersed in salt baths which deprive the sensory organs of signals. The same effects are observed in subjects under hypnosis, suffering sleep deprivation, or taking narcotic substances. At the end of the nineteenth century it was discovered in Wilhelm Wundt’s laboratory that certain people systematically underestimate the duration of time while others overestimate it. It has also been observed that the perception of time is dependent on the nature of external stimuli. Visual signals of exactly the same duration appear longer than auditory ones. If a period of time is divided into short intervals, it is perceived as having been extended. Short intervals lead to overestimation and long intervals to underestimation. Such experiments are studying the basic human ability to determine duration, and test subjects need to have a balanced state of mind. Emotions are seen as flawing the experiment.
We, however, are precisely interested in the perception of time in different psychological and emotional states. Subjective time or, as it is called, psychological, inner, personal, or intrapersonal time, differs both from objective chronological time and from the time registered by our biological clocks responsible for the functioning of the vital organs. These second and third kinds of time are not considered here, and we will say nothing about social time. Depending on one’s state of mind, time may be experienced as flowing smoothly or advancing in leaps and bounds, as compressed or extended, full or empty. It proceeds unevenly. It is irregular, multi-faceted and manipulable. None of these features applies to chronological time.
The fact that perceived duration of time correlates directly with the subject’s psychological and emotional state and the kind of activity he is engaged in can act as the starting point for some extremely interesting research. A powerful motivation is the feeling that subjective time is somehow associated with the mysterious quality of human existence, and the hope that time might somehow be tamed. Indeed, if perception of time can vary widely, how can we contrive to extend the boundaries as much as possible? To rule time would be in some measure to surmount the finite nature of individual existence. Like all living things, human beings have an inborn aversion to death, and this is augmented by a culture which elevates the value of life to an absolute. It is not difficult to guess why culture does this, but the thought that this highest good is not bestowed in perpetuity can truly poison human existence. Other cultures had different attitudes to the matter of life and death. The culture of ancient Egypt, for example, did not present the afterlife as being particularly scary. Despite the fact that our attitude to death is dictated by nature, the human mind seems close to understanding how to mitigate or altogether do away with the fear of death.
We need to rebrand the idea of life, shifting the emphasis from the nominal to the subjective duration of time. If we can move from thinking about gross quantity to net quality the problem will be half solved. The need is to move the hands of the clock from objective time to inner time, and learn to slow them down.
This is possible because the sensation of time is directly dependent on the experiences with which it is filled. Sergey Rubinshtein formulated the “Law of the emotionally determined assessment of time”. Time, filled with events with a positive emotional connotation, is experienced as condensed. When filled with events with negative emotional connotations it is extended. At first glance this looks very unpromising. What is so great about the fact that positive time flows by quickly? Indeed, if the quality of time and its duration are in inverse proportion, then what you gained by increasing one you would lose by decreasing the other and would end up with no net gain. Fortunately, quite the opposite happens. How long a life lasts subjectively depends on how it is remembered, on post-factum rating of its fullness and duration. William James, the psychologist and founder of pragmatism, said much the same as Rubinshtein: “In general, a time filled with varied and interesting experiences seems short in passing…”. However, James immediately went on to perplex the reader with a paradox: “…but long as we look back. On the other hand, a tract of time empty of experiences seems long in passing, but in retrospect short”. Thus, perception of duration during the process is the exact opposite of what it is after it concludes. A person engrossed in what he is doing keeps no track of time, and accordingly it flows by rapidly. When he comes to recollect it, however, it plays back in his consciousness as having lasted for ages. Similarly, if someone has spent time empty of experiences, instead of living he has merely suffered time’s unbearable slowness. He has nothing to remember other than his own wretchedness. Now everything has been turned upside down: subjective duration is a function which increases in line with the quality of time. 
Time well spent generally impresses itself in memory as having been full and long lasting, and its contribution to the money box of life has been substantial. As we noted earlier, the integral of quality personal time can be seen as quantifying happiness. Now we can see that it simultaneously indicates subjective duration, which is closely connected with the perceived quality of time. Accordingly, maximising the integral means winning the race against the time nature has allotted us. In this light, living two or three times more intensively is tantamount to living the same amount longer and with better quality. On the other hand, to live with zero intensity is close to not living at all. In other words, in order to live a long life, we need to fill the years we have as fully as possible with experiences. This process is something anyone can manage since no abnormal sensitivity is required to recognise the interconnection between the emotional colouration of time, its intensity, and its perceived duration. Most people are capable of this, which means that if they so wish they should not find it particularly difficult to master this kind of self-regulation.
The link between time and the emotions has been confirmed many times experimentally, and also by clinical research. Manic patients greatly underestimate the duration of present time, while people in depressive states overestimate it. The experience of time differs markedly between well-adjusted and maladjusted adolescents. For the latter, the passage of time is empty and unengaging. They look to the past, not the future. The same is observed in people who are chronically stressed.
From the above we would seem to have arrived at a handy recipe for increasing the effective duration of life: we should just do all we can to fill our time and to keep our mind and soul busy. Unfortunately, this is not a practical recipe but mere wishful thinking, which it will remain unless we can devise a way of bringing the whole idea down to earth by proposing a generally accessible method. Psychology explains a lot about subjective time, but gives no instructions on how to optimise it. At this point we can turn to the concept of personal time elucidated by K.A. Abulkhanova-Slavskaya. She advises that the recipe for true longevity is to live “deeply”. She talks about the “quality of personal time” and links it to self-realisation. This may sound like a truism, adding nothing to the familiar concepts of “having a good time” and “quality of life”. As soon as a person begins to track subjective time and register their attitude to it, however, the situation changes radically. The term “quality” takes on a specific meaning and practical sense. In order to advance to the management of personal time it is necessary only to distinguish gradations of quality, to document them, and to be guided by those standards (as yet undefined). These elements are not included in Abulkhanova-Slavskaya’s concept, but her conclusion is that we need to give priority to timeliness, by which she means filling time in the best way possible in the given circumstances. (Let us note that this wish is formulated in full accordance with economic logic.) The problem is that it is no easier to define a measure of timeliness than to define the parameters of quality. Merely replacing one clarion call with another will not move us forward. A separate issue is indicators of human time: they need to be simple and to fit readily into everyday routine. What is the point of characteristics which can only be measured in a specially equipped laboratory?
Of course, neither is it any use proposing indicators unless we can persuade people to use them.
In a word, what is needed is regular rating of the quality of subjective time and an easy way of registering the results. Happily, neither of these is impossible. It is as natural for a person to register the quality of time as to breathe. It is an innate (and trainable) ability, as is indicated by the millions of ratings in various collaborative systems. The procedure does not require any special effort on the part of participants, nothing beyond what they are already accustomed to doing on the site when they post ratings. Let us not forget that by quality of time we have in mind a generalised emotional distillate of everything that fills time: experiences, reflections, insights, calculations, meditations, creativity. This characterisation is unnuanced and conveys only the vividness of an emotional colouration, positive or negative.
How differentiated should the scale of measurement of quality time be? In theory it should, on the one hand, correspond to people’s ability to distinguish gradations of quality, and on the other provide an optimal balance between the computational cost and the precision of the calculations. From the experience of recommendation sites, a five-point scale is not really sufficient. A ten-point scale is sufficient (though in the specialist literature one will find arguments in favour of a seven-point scale). If it is to become a tool for personal self-management, the rating of inner time should become a habit and occur almost unthinkingly. A good way of achieving this is by combining it with some routine activity, and here there is nothing better than the third-generation social networks where reflecting on the quality of time dovetails nicely with satisfying a whole range of other needs, such as navigation, self-presentation and socialising. It remains only to recognise this and start making ratings. Accordingly, to master the practice of managing subjective time it is necessary to: (1) make it a rule to rate the quality of personal time; (2) register the results in numerical form.
Although some of what is said here about subjective time is a matter for the future, managing it is not something unprecedented. Each of us is conscious to a greater or lesser degree of the potential value of our time, and to the best of our abilities we maximise the return from it. Everybody has a different planning horizon, with some people maximising their pleasures within the limits of the next half hour while for others it is decades. Some forms of activity are accepted as mutually complementing each other, with neither having to be given priority. Undemanding reading matter admirably complements the pleasures of the beach, while an intellectual who has worked hard need not be ashamed of preferring a low-brow movie to a profound and serious masterpiece, simply because at that particular moment he does not have the emotional resources to engage with the masterpiece. Having a rational attitude towards time does not mean squeezing the maximum out of every moment. That is just as prodigal as running full tilt in a long distance race. It is also fairly obvious that in the longer term the optimum will be to maintain a balance between activity creating the preconditions for future quality time and activity which gives an immediate return. Certain occupations are on the borderline, for example, gaining knowledge and, more broadly, consuming information. There is an obvious parallel with the way factories maintain a balance between preparation and actual production. When allocating time to various types of activity it will make sense to build up a balanced portfolio the way investors do. This should include “growth stocks” so that, when earlier activities begin to bore you, you have something to switch to. To conclude this list of helpful advice from the author on how to live your life, do retain a sense of moderation and do not get caught in a treadmill of time optimisation so that agonising about that does not mar the time it is supposed to improve. 
There are other fairly straightforward techniques for time management, but for most people they would not be acted on until a clear, systematic method has been devised.
The category of quality time allows us to understand a number of phenomena which are otherwise difficult to explain. Many people are puzzled when they observe notably carefree people totally devoting themselves to business. If we put to one side the suspicion that their supposedly Herculean overworking is a myth, their improbable busyness really does look to an outsider like Aristotle’s chrematistics, that is, making money for the sake of making money. In reality however, as we can now understand, for certain types of personality this is a way of maximising the quality of time. The position of the head of the firm, apart from its obvious material rewards, gives scope for freely managing time and personal resources. The leader himself determines the pace of work and play. He chooses when, with whom, and under what circumstances he will resolve particular issues; which of his pet schemes he will realise in practice; and what degree of dependence on the will of other players he is prepared to tolerate. If a person is possessed by a desire to organise his world so that within its confines everything should operate in accordance with the rules he establishes, no amount of prosperity will persuade him to stop.
A capitalist who has become an addict of this kind of motivation is disinclined to allocate more than a modest percentage of his income to personal consumption. All the rest is ploughed back into his business and, in most cases, serves to benefit society. This is the moral justification of capitalism and altogether a strong argument in favour of it which seems never to have been clearly expressed. It seemed sufficient to claim that free capitalist competition between individuals pursuing egotistical aims works for the benefit of society as a whole. In fact, however, the benefit derives not so much from competition as from the fact that it goads the capitalist into reinvesting income in the business. De jure, the capital belongs to a private individual, but de facto that individual is stewarding it on behalf of the other members of society. Whether people are aware of the fact or not, entrepreneurs slog away not merely for their own benefit, but for the good of society.
The capitalist system concentrates capital and resources in the hands of those most capable of managing them. What the capitalist maximises is not money for his personal consumption (as far as that goes, the margin of utility is soon reached), but the opportunity to be his own master and the quality of personal time, a condition of which is the economic power he has acquired. Freedom for self-realisation and the scope for large-scale risk-taking are goods of which people never tire. As regards material freedom and physiological comfort, these, like any utilitarian goods which have become habitual, are perceived as no more than what is due. The least encroachment on them, however, will produce a sharply negative reaction and oblige the capitalist, regardless of his overworking, to rush to the defence of the status quo.
Managers one step down are already restricted in respect of this kind of freedom and beholden not only to circumstances but also to the management above them. They give up part of their freedom in return for guarantees and support from above. Those at the base of the pyramid of hired labour usually have very little freedom, but that does not deprive them of opportunities for self-realisation in their profession. It follows from this, inter alia, that Nietzsche’s notorious Will to Power, which has generated so much philosophical discussion and raised such a hullabaloo in Russian society, is merely a particular instance of a strategy aimed at acquiring quality time. This demystifies this human characteristic. In no way is it an inborn trait of human nature, as a familiar doctrine insists. The will to power is explicable as an aspiration to personal freedom, to be able to occupy yourself as you please, when and with whom you please. Power over others is a means of enabling a person to regulate relations between himself and his inner time. It is not the will to power but the quality of time to which everything can be traced back. These are the terms in which it should ultimately be interpreted. The primal principle on which any society is organised is the erection of a vertical hierarchy, and this too is associated with people’s aspiration to control time. To some degree that is why they climb the social ladder in the first place, in order to be able to impose on those below them the calendar they personally desire.
Recognising that inner time can be both positive and negative enables us to clarify a number of issues, the better to understand the processes of the New Economy. Choice, as we have seen, comes at a price. When that was realised, an amendment was made to the idea of classical economics that man was an ideal maximiser. It was seen that, when choosing, a person eventually finds it makes better sense to stop at a satisfactory option rather than go on seeking the absolute best, as there is no guarantee that the additional search costs will be recouped. As Herbert Simon put it, the search continues until the additional marginal costs outweigh the anticipated gain from the search. This notion can be further refined by adding that we need to take account not only of time and psychological costs, but also of the value of the time spent in choosing.
That process may itself give pleasure, and then the costs should be reduced by that amount and the search continued for longer than would be appropriate if it were not the case. On the other hand, the search may be exhausting and disagreeable, and then it would make sense to reduce the search time. The desire to avoid negative time spent on searching and anticipation means we need to be less picky. It is on this attitude that markets in which information is not transparent, including the cultural markets, often prey. The consumer agrees to practically the first thing he is offered, just in order to terminate disagreeable search time as quickly as possible. The burden of negative time spent choosing pulls down the average level of the quality of time lived.
3.3.2 Research into Emotional Dynamics
Subjective time is associated with the emotions, and these have been little better studied than time itself. It would be good to have a clear picture at our disposal of the development of emotions, but for the time being we have to make do with very general, mostly artistic, representations. If the absence of knowledge about subjective time can be explained by its being overshadowed by time in general, emotions, on the contrary, are in full view. The preparatory work has already been done by psychologists. Theories have been proposed, emotions classified. It would be good now to embark on detailed study of their dynamics, to see how one succeeds another, how they unite and form stable combinations. Research in this area would have a clear practical application. When we are talking about happiness, itself regarded as an emotion, the role of emotions in a person’s inner life is clearly very important. In the near future we can expect to see the emergence of markets and of technologies associated with the management of emotions. So far, though, as in the study of other dynamic processes, the social sciences have had little success.
Even without research we can state with confidence that every emotion has a beginning, an end and a characteristic duration (which will be different for different emotions). We know, for example, that the acute phase of grief following bereavement lasts around 6 months (Vitis Vilyunas). As regards other emotions, we have only a modicum of knowledge about how they develop, and in particular about their duration. We may surmise that for each personality type a particular order and range of emotions is desirable, and that departures from this ideal will have consequences. A person needs to undergo certain experiences at particular times, but circumstances sometimes prevent this and cause a strong, often unrecognised and unsatisfied demand for situations which would enable harmonious emotional states to develop. If there is a shortage of certain experiences, a person’s emotional life may be disfigured, deviate from what is optimal, and be in need of correction. The individual tries his best to overcome the deficiency by seeking or even unconsciously setting up situations to provide an outlet for his emotions. Long suppressed irritation, for example, may burst out entirely inappropriately over some trifling matter.
At this point it is worth turning our attention to one of the most pleasing but neglected attributes of art and the entire realm of the aesthetic. Art is a universal means of regularising the emotions. Aesthetic culture is a factory which manufactures experiences. Its storerooms are bursting with emotions preserved in texts, images and sounds (as well, of course, as with plenty of stuff which is past its sell-by date or was never much good in the first place.) Delving into these storerooms, a person can live a kind of other life which suffuses and expands his own. At the same time he enhances his emotional repertoire and learns to empathise: he sorts out his own “emotiogram”. These storerooms have the very valuable characteristic of being freely accessible, enabling a person to take advantage of them at a time to suit himself. From this point of view the very existence of culture assumes special significance.
As with subjective time, collaborative practice also has a role in the studying of emotions. This would require a facility for registering emotions when rating works or other objects. On a software level this is not difficult. Some people will rush to see it as a further step towards mechanisation of the soul, the pharmacy issuing emotions on prescription (or without). I see no genuine threat here, especially since the idea is unlikely ever to be fully implemented. This upgrade to the cultural supermarket might even be welcome. What is wrong with delivering works in the light of demand for particular emotions? A similar service has been provided for years by music sites which will select music for a particular mood, and so far there have been no fatalities.
Although in the present volume inner time and emotional dynamics are considered only as they relate to collaborative practice, research in these areas obviously cannot be so restricted. Other approaches will need to be brought in, among them computer-assisted linguistics (semantic analysis of the blogosphere), computer-assisted tomography, neurobiology, research into sleep and dreams. A special place in this research should be reserved for music, which is based on perception of durations and is, of all the arts, the one most closely associated with subjective time.
3.4 The Measure of Symbolic Exchange: Second Money
Having introduced the concept of subjective time we can now move on to the already mentioned matter of a measure for the symbolic. Until now this fundamental problem has unavoidably been put to one side, but all other attempts to devise a system of measurement in the humanities tie in with it. What is this measure? How, for example, is the value of a blockbuster to be compared with that of an art movie? Such traditional indicators as star ratings and box-office takings are, as we have seen, pseudo-measures of symbolic value. They are more likely to mislead than to enlighten. Who cares if some smash hit watched by millions has pulled in ten times more time and money than a movie with an audience of 100,000? We simply do not know the cultural impact of either of them, emotional, social, educational, developmental or other. Quite possibly that matters more than the difference in the size of their audiences. But then again, perhaps it doesn’t. Within one system of coordinates art movies are more valuable, within another blockbusters are. No matter what may be said, no agreement will be possible before we start using the concept of the quality of inner time.
A work of art’s impact is determined by the quantity of associated quality time, which theoretically depends on the number of moviegoers (recipients), their level of cultural capital, and the strength of the impression made on each of them. Alas, the two latter quantities are completely unknown. Even points ratings indicating the degree of satisfaction do not help much, because they do not contain the information needed to correlate correctly, for example, 100,000 viewings at an average rating of “6” with 10,000 viewings at an average of “9”. Bearing in mind that the starting point for a positive reaction is “5” (the so-called semantic zero of the 10-point scale), and taking account of how far ratings deviate from it in either direction, it would seem possible to even out the difference in the numbers of ratings. Alas, this approach, despite its obvious arithmetical neatness, gets us nowhere. The problem is that we do not know what psychological distance separates a “9” from a “6”. The points on the scale denoting the force of the impression made are not equidistant. For example, the jump from “7” to “8” may correspond in subjective perception to an increase two or even three times greater than a rise from “6” to “7”. We do not know. We can only say that if we had data about the true state of affairs, we would obtain a steeply rising curve, but exactly how steep we cannot say. Nor is there any certainty that gradations of satisfaction are identical in different sectors of consumption. It is logical to suppose that they differ. In short, the points of the satisfaction scale are more likely spaced logarithmically than evenly, and we can say nothing more definite than that. It seems reasonable to ask whether people would prefer five “7” movies or one “10” movie, but for a number of reasons it is quite impossible to establish this, and even direct questioning is unlikely to shed any light.
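The difficulty can be made tangible with numbers. In the hypothetical comparison below, a rating’s weight is its deviation above the semantic zero of “5”. Under a linear reading of the scale the mass-audience hit wins; if each step up the scale is instead assumed to double the subjective force of the impression (one plausible steeply rising curve), the verdict reverses. The doubling rule is purely illustrative, not a measured fact about perception.

```python
# Two hypothetical movies from the text: a hit and an art film.
# Which carries more symbolic value depends entirely on how the
# distances between scale points are read -- which is exactly the unknown.

SEMANTIC_ZERO = 5  # the neutral mark on the 10-point scale


def linear_weight(rating):
    # assumes scale points are equidistant
    return rating - SEMANTIC_ZERO


def doubling_weight(rating):
    # assumes each step above the zero doubles the subjective force
    return 2 ** (rating - SEMANTIC_ZERO) - 1


hit = (100_000, 6)   # 100,000 viewings at an average rating of "6"
art = (10_000, 9)    # 10,000 viewings at an average rating of "9"

for name, weight in [("linear", linear_weight), ("doubling", doubling_weight)]:
    hit_value = hit[0] * weight(hit[1])
    art_value = art[0] * weight(art[1])
    print(name, hit_value, art_value)
# linear:   hit wins (100000 vs 40000)
# doubling: the art film wins (100000 vs 150000)
```

The ranking flips with the choice of curve, which is the text’s point: without knowing the true spacing of the scale, the arithmetic decides nothing.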
From the statistics accumulated by recommendation sites we can see that there are between two and nine “8”s for every “9”. Given that range of dispersion it is impossible to establish a norm. The only thing that is clear is that this correlation is associated in some way with the short supply of high-quality movies, and that cinemagoers are unable to find them when they want them. It also seems to indicate that people need something light and relaxing, something to pass the time, and that plays into the hands of “7”-rated movies.
Our example is taken from the cinema, but it would not be difficult to find any number of other illustrations of the limited reliability of points. For example, it would be absurd to compare the points awarded to a joke, a video blog, or a ringtone with ratings of major art forms. We are dealing here with categories of entirely different weight. Points are uninformative not only when comparing different kinds and genres of art. For example, some reasonably well edited entertainment movie will have little trouble earning itself an “8” in the context of similar products with no particular pretensions. That same “8” will be awarded to an outstanding movie which has won it in the heavyweight category of major art. Obviously, these are “8”s of differing magnitude, but the points system blurs the difference. Despite the fact that ratings do not reflect an absolute value, they are better than nothing. In many cases, where all that is needed is a straightforward hierarchy, the points system works reasonably well. It works, for example, for collaborative filtering or in the awarding of examination marks in the education system. It is incapable, however, of producing absolute measurements.
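Collaborative filtering is indeed a setting where raw points suffice, because only agreement between users is needed, not an absolute measure of value. A minimal user-based sketch, with invented ratings and a deliberately simple similarity measure: find the neighbour whose past verdicts most resemble mine and borrow his rating for an item I have not yet seen.

```python
# A minimal user-based collaborative-filtering sketch (invented data).
# Points work here because only a straightforward hierarchy of agreement
# between users is required.

ratings = {
    "me":    {"A": 8, "B": 3, "C": 7},
    "alice": {"A": 9, "B": 2, "C": 8, "D": 9},
    "bob":   {"A": 2, "B": 9, "C": 3, "D": 4},
}


def similarity(u, v):
    """Negative mean absolute difference on co-rated items (higher = closer)."""
    common = ratings[u].keys() & ratings[v].keys()
    return -sum(abs(ratings[u][i] - ratings[v][i]) for i in common) / len(common)


def predict(user, item):
    """Borrow the rating of the most similar user who has rated the item."""
    neighbours = [v for v in ratings if v != user and item in ratings[v]]
    best = max(neighbours, key=lambda v: similarity(user, v))
    return ratings[best][item]


print(predict("me", "D"))  # alice's tastes match "me", so her "9" is borrowed
```

Real recommender systems use subtler similarity measures and weighted averages over many neighbours, but the principle is the same: relative ordering is enough, and no absolute scale of value is ever needed.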
If we can’t use ratings, box-office takings, or points, what can we use as a measure of symbolic value? The fact of the matter is that right now there is nothing suitable. However, the fact that we don’t currently have a measure does not mean that one could not be created. The measure of the symbolic could be gratuity payments, that is, ordinary money used in the extraordinary function of post factum voluntary payments. The more donations an item collects, the higher its value. As this method of expressing satisfaction through monetary supplements becomes the norm, so values will be given numerical expression and it will be possible to rank them in order of the amount of money they attract.
The logic behind this solution is that symbolic value (applied to works) or symbolic capital (applied to artists) become visible through their ability to engender quality time. The measure of this ability can be gratuity money, as that is what reveals the subjective attitude of recipients to value. The fact that post factum payment (which is crucial if measurement is to be possible) is relatively rare at present should not put us off. This is learned behaviour. There are many signs that donating money is becoming normal and likely to grow in the near future. In the first place, a kindred phenomenon is already showing a considerable head of steam, in that education, medicine and the protection of nature, for example, are largely financed privately by charitable donations, and indeed the purchase of branded goods at thrice the price is itself not far removed from covert voluntary donation. In the second place, we know that in the past culture existed largely on the basis of patronage. It seems entirely natural to re-introduce this model, at least in part, in the latest spiral of history, with the difference that major donations will be replaced or supplemented by popular micro-patronage on a massive scale. Supplementary payments have an entirely utilitarian motivation, which also stimulates both producers and users. If we free ourselves of the prejudice against the “mercenariness” of money, it is easy to see how close a gratuity scheme is to the spirit of human relationships.
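Operationally the proposed measure is simple: every post factum payment is a vote weighted in money, and works are ranked by the totals they attract. A sketch with an invented stream of micro-donations (the work names and amounts are hypothetical):

```python
# A sketch of aggregating post factum gratuity payments into a ranking
# of symbolic value. The donation stream is invented for illustration.
from collections import defaultdict

donations = [
    ("art_movie", 2.0), ("blockbuster", 0.5), ("art_movie", 1.0),
    ("blockbuster", 0.5), ("novel", 3.0), ("art_movie", 0.5),
]

totals = defaultdict(float)
for work, amount in donations:
    totals[work] += amount

# Symbolic value emerges as the ranking by accumulated gratuities.
ranking = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # art_movie 3.5, novel 3.0, blockbuster 1.0
```

Note that, unlike an up-front ticket price, each amount here is freely chosen after consumption, so the totals reflect revealed satisfaction rather than uniform pricing.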
Now that we have a claimant to the role of a measure of value, let us support it by showing how the measurement might work in practice. At the same time, we will see that it is the only approach possible.
It is generally accepted that the only way of measuring the value of goods is in relation to other goods. It cannot be inferred a priori and becomes visible only in exchange, with each side pursuing its own advantage and unwilling to give more for the good than it is judged to be worth. But what if products are not directly bartered for each other? If there is a good which participates in all exchanges, value can be expressed in units of that good, and in practice it is money which is the good universally exchanged. In the symbolic field, however, money is a bad instrument of measurement because it does not have stable purchasing power. An incontrovertible sign of money’s failure in this realm is uniform prices for entirely different digital products. Books, movies and sound recordings all cost the same, irrespective of their content or quality. For the symbolic we need some analogue of money, but one which corresponds to its needs and specifics. We need first to establish what the features of this second money should be, and then we will see how to arrive at it. Let us first think about the standard market measures of cost. What are the qualities of exchanged objects and what are the procedures which make quantitative evaluation possible? Knowledge of the inner workings of this mechanism will help us to discover second money.
In the first place, to measure value we can use only something which is itself a limited resource. At least, it should be in short supply for the parties to the exchange, since otherwise the claimant to the role of the means of measurement would be exchanged for a needed good in arbitrary amounts, and would neither reveal the true need for the product nor serve as a measure of its value. A person has no shortage of rating points (except in rare situations where they are artificially limited), and accordingly points cannot serve as an instrument of measurement. In the second place, although value does exist in human minds, it is no easy matter to extract it from there. Even the individual himself will have difficulty estimating in purely intellectual terms how precious a particular object is to him, and giving a number to that value. He needs guidelines for comparison. In just the same way, it is difficult to gain a sense of the intensity of human sensations, experiences and motivations until they become manifest in decisions and actions. Value becomes objectivised in exchange and in no other way. Its primary indicator is the human individual, a kind of testing probe. Guided by his feelings, a person decides what quantity of a good in short supply (money) he is prepared to part with in exchange for a particular value. This decision cannot be taken in isolation, independently of other deals, and without their all taking their place within the context of the budget at the individual’s disposal. People can feel their way towards the monetary equivalent of value only through the regular practice of allocating their budget to categories of essential expenditure. Hence the third, least recognised but no less important, condition of the functioning of the traditional monetary system is the uniformity or repetitiveness of deals.
In order to allocate the budget knowledgeably, the individual needs to have an idea of the value of particular goods, and for this he needs first to have experienced them. In addition, it is desirable to be able to estimate how well the selection of goods complement each other. Underlying more or less firm prices is the repetitiveness of acts of consumption. Providing there is uniformity and openness of dealings, the market mechanism of supply and demand reduces numerous private, subjective assessments to single prices. That is, it produces intersubjectivity of assessment. Quite soon an average (“fair”) price emerges which serves as a guideline for the parties. In the absence of repetitiveness and openness, for example when a good is constantly changing or deals are done in secret, the market is incapable of measuring value. If it does, there will be a wide margin of error.
Let us analyse symbolic exchange in the light of the three preconditions for monetary measurement listed above. Their presence will indicate where we can simply follow the logic of ordinary money, but their absence will oblige us to consider modifications.
At least we do have one essential element, and that is a subject capable of judging value. Things are less easy in respect of everything else. As we have said, goods bearing value which is to be measurable by the market must be scarce, and the subjective need for them derives from past experience. The symbolic does not have this quality of scarcity. It is released into the air, it is on open access: on the internet, in libraries, in the ether and, with few exceptions, it is as free as air. (The exceptions include anything which has a material shell which is difficult to reproduce, for example theatre and graphic art.) This of itself virtually rules out monetary measurement. Unlike material goods and services, the consumption of symbolic products by one person does not deprive others of the same opportunity. Indeed, it is often the case that the more extensive the circle of consumers, the better. The product may be present in unlimited quantities and is not removed from subsequent consumption, so where is its scarcity? One can artificially bar access and then demand payment to restore it, but given today’s facilities for copying and distributing content that is already almost impossible and likely to be even more problematical in the future (unless the content producers manage to lobby through wholly draconian measures against private individuals who infringe copyright). Symbolic products are non-rival and cannot be excluded from consumption. These are the hallmarks of public goods, which do not have to be paid for and which accordingly are at the mercy of freeloaders. Since symbolic exchange does not require strict observation of parity of benefit to the parties, the motive to track the proportions of the exchange disappears, with the result that no measurements are made. Conventional exchange reveals value precisely through the mechanism of firm recompense.
Let us abandon the dead end approach through scarcity of the symbolic product and try a different angle. Let us seek what could theoretically serve as an instrument of measurement in the symbolic realm. Clearly this should be universal and also in limited supply, since otherwise we will find ourselves trying to use a ruler with a movable scale. Perhaps time will serve? It seems a logical partner for the symbolic, no worse than money for the material. Time is plainly a limited resource and people have a fair idea of its value. Consumption of the symbolic involves the spending of time, and it should be possible to judge the value of particular products from the quota of time allocated to them. Alas, the idea of using time as an instrument of measurement does not survive closer scrutiny.
If the anticipated utility of each cultural good were based on experience, then allocating part of the budget of time to it might possibly tell us about its value. The newer an information product is, however, the less predictable is its effect, and the more predictable it is, the less is its effect. The undoubted fact that the non-repetitive nature of symbolic goods hinders calibration of value using time is only half the problem. Time has another fundamental defect which puts an end to its candidacy. Of the two requirements a measure of value must meet, that it should (a) be limited and (b) be a valuable resource, time meets only the first. It is always limited, but by no means always valuable. The inner psychological value of periods of time of equal duration is highly variable. The real snag about measuring by the hour is that willingness to spend time very much depends on the value of what is laying claim to it. Lacking a stable value, time lacks the consistent purchasing power characteristic of money.
Worse is to come. Time nullifies its purchasing ability every second, urging a person to use it without delay and irrespective of the price. You cannot save it, cannot deposit it in the bank, cannot make it grow. It is spent irrespective of whether you intend to spend it or not. The impossibility of putting off the act until a more suitable moment devalues time and disqualifies it from being used as a measure of values. If something analogous were to happen to money, that is, if it ever encountered a negative interest rate, everything in the world would change. If money burnt their hands, people would throw it away on the first good that came along just in order to be rid of it. In such a situation there is no incentive to check parity or to weigh decisions. Information economics explains money by the need to use it to take a pause in exchange deals, giving one the opportunity to choose the optimal variant. Money mediates exchange, splitting the transfer of values from one set of hands to another into the two stages of buying and selling, and one can wait out the interval between them by using money. Time cannot function as an exchange intermediary because in the course of the exchange it will have been permanently reduced.
Accordingly, there is a void on both sides of the exchange equation. Measurement is the comparison of two scarcities, but there is none on either side of the scales. On one side there is information, the value of which is to be measured but which is tending towards being a free good. On the other is the time which is spent on consumption of the information and, although in principle it is in short supply, that is not the attribute it is showing here. To correlate the one with the other is like trying to balance two vacuums.
Labyrinthine as the maze may be, there is a way out of it, which we will find if we view symbolic exchange as a means of transforming an individual’s calendar time into quality subjective time. Speaking in purely economic terms, a person should allocate to the perception of various kinds of information such quotas of time (also of attention and other personal resources) as will yield a maximum of well filled and high-quality subjective time. This is not information in general, but the subjectively valuable part of information which engenders quality personal time, and that is the scarce resource we are seeking. Optimising time means (a) to use it in a timely manner; and (b) to fill it with valuable and valued content. This can be realised if content of the requisite quality is always to hand. Then, when a window of time appears, there will be options available with which to occupy it.
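What "optimising time" could mean operationally can be made concrete with a toy sketch, under the frankly simplifying assumption that each candidate occupation already carries a numerical "predicted quality per hour" (in practice such a score would have to come from collaborative prediction, as discussed below in the text):

```python
# Each candidate item: (title, duration in hours, hypothetical quality per hour).
items = [
    ("lecture", 1.5, 0.9),
    ("series_episode", 0.75, 0.6),
    ("feature_film", 2.0, 0.8),
    ("podcast", 1.0, 0.4),
]

def fill_window(items, window_hours):
    """Greedily take whole items in order of quality density while they fit."""
    chosen, used, quality = [], 0.0, 0.0
    for title, duration, q in sorted(items, key=lambda it: it[2], reverse=True):
        if used + duration <= window_hours:
            chosen.append(title)
            used += duration
            quality += duration * q
    return chosen, quality

# A three-hour window: the film no longer fits once the lecture is chosen,
# so the episode is taken instead.
plan, total_quality = fill_window(items, window_hours=3.0)
```

The greedy rule is only one possible allocation heuristic; the point is that once quality is quantified, filling a window of time becomes an ordinary optimisation problem.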
Although by and large there is no shortage of content, as soon as we start talking about the part which is of personal significance to a particular consumer, the shortage becomes acutely perceptible, and information about the quality of content is just as meagre. Access to value is obstructed by heaps of trash which have to be shovelled aside before one can reach anything deserving of attention. Since subjectively high-quality information is scarce, its value should become apparent in the process of discovering it. This is precisely what occurs in a collaborative system: the second money it activates, and all the rest, serves as payment for high-quality information, including data on the quality of the information itself. In accordance with the collaborative principle one may not obtain a prediction without expending resources on creating one’s own profile, in the course of which the operation of subjectively weighing up value is performed which finds quantitative expression in post factum payment.
When a large number of people reveal their view of value, that is itself a process of measuring it. It is essential, however, that this should be accompanied by the expenditure of resources which belong to them. The scarcity which is a condition for measurement of value is not where you would usually find it, not on the supply side with the necessary material resources, but on the side of consumption and, accordingly, of personal resources. This process of collaborative assessment of value is in some ways analogous to what occurs in conventional markets. By making purchases, a person satisfies his basic needs and only in passing, and in most cases without realising it, participates in the measurement of value, influencing the price. In our case, a person is striving to optimise his choice as a cultural consumer and also thanking the creative artist. To do so he performs a number of actions, a side effect of which is measurement. In return he receives indispensable navigation assistance. In this way, when measuring symbolic values the same conditions are observed as in the utilitarian sphere: the process is linked to many acts of exchange; it serves to optimise consumer choice; and additionally it is an incidental but inextricable part of exchange.
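A minimal sketch of how such a collaborative prediction might be computed follows. It assumes, purely for illustration, that post factum payments double as ratings and that taste proximity is measured by cosine similarity; real recommender systems use more elaborate variants of the same idea:

```python
import math

# Hypothetical user profiles: post factum payments per work.
profiles = {
    "me":    {"w1": 2.0, "w2": 0.5},
    "alice": {"w1": 2.5, "w2": 0.5, "w3": 3.0},
    "bob":   {"w1": 0.5, "w2": 2.0, "w3": 0.5},
}

def similarity(a, b):
    """Cosine similarity over the works both users have paid for."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[w] * b[w] for w in common)
    norm = (math.sqrt(sum(a[w] ** 2 for w in common))
            * math.sqrt(sum(b[w] ** 2 for w in common)))
    return dot / norm if norm else 0.0

def predict(user, work):
    """Similarity-weighted mean of taste neighbours' payments for the work."""
    num = den = 0.0
    for other, prof in profiles.items():
        if other == user or work not in prof:
            continue
        s = similarity(profiles[user], prof)
        num += s * prof[work]
        den += s
    return num / den if den else None

# "me" has not consumed w3; the prediction leans towards alice, the closer
# taste neighbour, rather than bob.
p = predict("me", "w3")
```

Note how the measurement is a by-product: the payments exist to build the profile and thank the creator, and the prediction falls out of them.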
The idea of measurement by means of second money can be explained in another way, by making use of the concept of “consumer surplus”. This is how economists designate the difference between the sum paid in accordance with the price list and the amount a person would in principle be willing to pay if he knew in advance the utility he would obtain from his purchase. The sum of consumer surpluses, if it were published, would signal symbolic value. Although people are normally disinclined to reveal information which might be used to their disadvantage (for example, to raise prices), in the case of collaborative filtering the situation is the reverse: it is advantageous to communicate information, and this leads to personal gain.
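The arithmetic of consumer surplus is elementary; a sketch with invented figures:

```python
# Consumer surplus: willingness to pay minus the list price actually charged.
# Willingness to pay is known, if at all, only post factum; figures invented.
list_price = 5.0
willingness_to_pay = {"ann": 9.0, "ben": 5.5, "cat": 12.0}

surpluses = {u: wtp - list_price for u, wtp in willingness_to_pay.items()}

# Summed across consumers, the surpluses signal symbolic value: 4.0 + 0.5 + 7.0.
symbolic_signal = sum(surpluses.values())
```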
To recap the key concepts introduced so far:

calendar time: a resource rationally expended as a result of collaborative filtering;
quality subjective time: increasing this is the aim of optimising the use of calendar time;
second money: ordinary money but used in an extraordinary manner, under a system of post factum payment. It serves as a measure of symbolic value, and builds the user profile which is essential for efficient navigation and access to values. Underlying measurement are subjective opinions objectivised through voluntary payments. The latter cannot be avoided without causing damage to the user profile;
collaborative filtering: a means of rating and predicting the quality of information, and simultaneously a pledge of conscientious rating/value judgements (without which the user will be ejected from the group of recommenders).
However complex the system which conjoins collaborative filtering and second money, this is the solution to a problem which until now was considered insoluble. Moreover, everything relating to filtering has already been tried and tested. It is worth remembering that until quite recently filtering evoked no less scepticism than retrospective donation schemes of recompense do today. We believe that the business model of trust advertising, which employs second money and is currently a hot topic, will shortly help to overcome this scepticism. Without it no way can be seen of resolving the acute problem of monetising content production (as discussed in Chap. 2).
And indeed, the method described for introducing it is not that difficult, bearing in mind the extraordinary nature of the task and the fact that everybody had lost hope that any solution existed. We are hardly going to measure talent in the proposed currency of “gauguins”. After extensive and fruitless searching it would be naive to expect a simple, one-step solution. If there was one, it would have been found long ago. This did not occur because certain essential components of the solution were missing: post factum micro-payments and collaborative filtering. Today they have appeared, but to implement micro-patronage there is no escaping the need for major institutional changes.
The origins of second money deserve to be commented on separately. It has long been recognised on an intuitive level that culture needs its own regulator, in some ways analogous to money. Everybody involved in this realm has overtly or covertly suffered from the lack of a tool which would enable them to compare non-material values with each other and also with material values. Since ordinary money was obviously not up to the job, a need was felt for “cultural money”. Although the need was generally recognised, it was also obvious that any attempt to create it ex nihilo was doomed to fail. It was unclear, however, what the alternative was. Collaborative practice led to a situation where there was no need to invent it. It would be enough just to upgrade the use of existing money by introducing a retrospective scheme of payment for experience goods as the norm. Post factum gratuity payment would reflect the sum of consumer satisfaction. These extra payments can be made either within the collaborative system or outside it, as already practised on a mass scale by people giving financial or other support to initiatives they care about. Until these actions are included in a collaborative system, however, second money will not be able to perform its measurement function. Within the collaborative filtering option post factum payment is only partly voluntary because group stimuli and sanctions operate. Such nominally voluntary payments we call “second money”, and that is where the future of symbolic exchange lies.
3.4.1 Multifunctionality of Second Money
Being able to measure the symbolic will cause a restructuring of all the markets and spheres of life involved. Until then, the symbolic realm will remain on the periphery of the empire of money, and money will assist in plundering the territory, like a colony, giving very little in return. Second money gives culture a chance to regain its sovereignty. As we have noted, one of the imminent metamorphoses is associated with trust advertising. There are other business innovations which promise major change in many markets: unmediated distribution with payment directly to the producer, and user certification of quality. The practical effect of the retrospective use of money will make itself felt only when it achieves the magnitude of an institution. Point-scale ratings in a recommender network are a step in this direction and through them the economic sense of post factum transactions will become clear, namely raising the efficiency of individual consumption. When the rating function is integrated with the function of post factum recompense to the creative artist, the points system will either be replaced naturally or supplemented by second money, bringing with it fully fledged monetary measurement.
Information economics theory tells us that money arose from the need to optimise exchange, to simplify the path of a good from its current holder to the person who needs it. Before money, this transition involved a succession of barter exchanges. A person organised a combination of preliminary exchanges in order to obtain goods which his opposite number, who possessed a good he required, would accept as payment. Money came into use primarily because it shortened this procedure, reducing it from many moves to two. Money reduced exchange to the two standard operations of buying and selling. The main idea behind money is that it makes possible the separation of an exchange in time and space. In order to do so it needs to be a transitional good with the power of universal exchange. Subsequently, two other functions were added to these initial functions of money as a means of circulation and measurement (which had enabled trade and commerce to get on their feet). It became also a means of payment and saving, as a result of which it became an inalienable part of the vast majority of transactions. The dazzling career of money is an example that its offspring, second money, should imitate.
Second money will flourish on the same basis as brought money into use, the uncoupling of exchange. Whereas conventional money optimises material exchange, second money enhances symbolic exchange. As payment for information about the subjective quality of information, and making it possible instantly to choose the best options for expending time, it raises the liquidity of time. As already mentioned, an important factor is that this new measure does not arise in a vacuum but emerges from the already existent institution of money. A further advantage is that second money performs other functions, of which we can mention at least three: (1) payment for services of consumer navigation; (2) recompensing creators of products; (3) measurement of symbolic values. There is also the function of saving, or at least fixation of symbolic capital, discussed below. Second money is thus multifunctional, which was the very quality underpinning the vitality of conventional money.
The more diverse the applications of second money, the more straightforward its progress will be and the greater its chances of gaining acceptance. Accordingly, let us take a look at some other practical examples of the use of second money.
3.4.2 Monetising User Activity in Third-Generation Networks
When second money enters collaborative practice, the question will arise of whether to restrict the size of post-factum payments. If they are not regulated, and there seems to be no reason why they should be, people will be able to donate however much they consider appropriate, irrespective of normative frameworks which are difficult to define to everybody’s satisfaction. It will be found that, for a given level of satisfaction, well-to-do people are able to transfer more for a product than those less well off. Accordingly, the effect of wealth will make itself felt. This will, on the one hand, complicate the game, but on the other it will offer some interesting possibilities. Despite people’s anxieties, payments of different size will not particularly inhibit the work of the collaborative mechanism, since proximity of tastes does not imply absolute coincidence of user profiles. It is enough for them to be similar, and the computer algorithms are capable of detecting just that. However, in measuring the symbolic capital of goods and services the effect of wealth does need to be taken into consideration, because measured value is proportionate to the sum of gratuity payments. Otherwise, we will find that if we take two works, one of which has been liked by people who are not well-off while the other has been equally liked by prosperous people, the former will lose out solely because its admirers have less money at their disposal for expressing appreciation. Does this mean that it has produced less effect on their souls and is less valuable? Probably not. If the differentiation of preferences does not correlate with the level of income, or the same works are liked by rich and poor in equal measure, this distortion will not arise. However, it is not worth trying to calculate everything while “standing on the riverbank”. As soon as the system gets working, answers will be found to such questions.
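One conceivable correction for the effect of wealth (a hypothetical scheme, not something the text prescribes) is to weight each gratuity by the payer's total gratuity spending, so that enthusiasm is measured relative to the payer's own budget:

```python
# Hypothetical payments by a rich and a poor admirer. Raw sums favour
# whatever rich users happen to like; normalising by each payer's total
# gratuity budget measures enthusiasm relative to means.
payments = {
    "work_x": {"rich": 50.0, "poor": 5.0},
    "work_y": {"rich": 100.0},
}
user_totals = {"rich": 150.0, "poor": 5.0}

def normalised_value(work):
    """Sum of each payer's share of their own total gratuity spending."""
    return sum(amount / user_totals[user]
               for user, amount in payments[work].items())

raw_x = sum(payments["work_x"].values())   # 55.0: the rich payer dominates
adj_x = normalised_value("work_x")         # 50/150 + 5/5 = 1.333...
adj_y = normalised_value("work_y")         # 100/150 = 0.666...
```

After normalisation work_x, whose poor admirer gave it everything he had, ranks above work_y despite attracting less raw money, which is exactly the distortion the paragraph above worries about.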
What we can say with all certainty at the present time is that monetary capital will be pumped over to symbolic capital and vice versa, and the result will be a better balance of each.
Post factum supplementary payments give solidity to a personal profile relative to a profile constructed solely on the basis of points. The actions of freeloaders underpaying for works they have liked could corrupt the system, but as a consequence the freeloader would incur costs in the form of poor quality recommendations and being ejected from the group. He faces the prospect of an inglorious demotion. Such is the basic defence of the collaborative system against parasites. Other defence mechanisms exist, based on the business motivation of users. As the experience of recommendation sites shows, a collaborative system can function effectively on points. At least, it can until it treads on the toes of affected industries by discrediting part of their output in the eyes of the public, which until then had been conscientiously paying up. At which point the industries will decide to smash the site. With the introduction into the system of money, recommender activity acquires its own business model whereby creators of information are rewarded in proportion to their contribution. A portion of post factum supplementary payments is directed to them, donated by users for the products they have recommended. We can and must see recommenders as conduits to a given good and, like any business facilitator, they have a right to an agent’s commission. Apart from commission payments, payment may be made to them directly out of gratitude, putting them on equal terms with the creators of products and services. This is logical, taking account of the fact that a good pointer is sometimes no less valuable than the object it points to. And if it makes sense to pay for high-quality information, why not pay also for information about the quality of information?
Since behind every recommendation there are specific people (those taste neighbours of the user on whose judgements it is based), it is possible to distribute post-factum money to individuals without any kind of arbitrariness, on a purely market basis. A particular recommender’s share will depend in the first place on his contribution to the prediction (which we recall is dependent on the taste proximity of the recommender and recipient). In the second place, it will depend on the time when the rating is uploaded. Pioneering raters should be given encouragement. Additionally, the recompense can be increased in proportion to the size of the post factum payment a particular recommender made to reinforce his judgement (and that sum is unlimited). For example, if one of two recommenders has “staked” ten times more money on a work than a second, then, other things being equal, his shareholder’s stake in the prediction should be that amount greater, as should his part of the income received for the prediction. Post factum money is thus not a slice of forfeited income but a kind of stake in a futures derivative which returns income if there proves to be a demand for the user’s prediction. There is a kind of reciprocity: supplementary gratuity payments boomerang back to those who pay. This is a magical quality fundamental to the practice of sponsorship and donation which in no way detracts from their sincerity or attractiveness. At the present time it is difficult to calculate all the consequences of paying recommenders, but overall this scheme unambiguously strengthens the institution of collaborative filtering.
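The distribution rule just described can be illustrated with invented figures; the particular weighting (taste proximity times an earliness bonus times the stake) is an assumption made for the example, since the text names the three factors but fixes no formula:

```python
# Splitting an agent's commission for one prediction among the recommenders
# behind it. All figures are illustrative.
recommenders = [
    # (name, taste_proximity 0..1, earliness_bonus, post_factum_stake)
    ("early_fan", 0.8, 1.5, 10.0),
    ("latecomer", 0.9, 1.0, 1.0),
]
commission = 6.0  # money collected for this prediction

# Assumed combination rule: multiply the three factors into one weight.
weights = {name: prox * bonus * stake
           for name, prox, bonus, stake in recommenders}
total = sum(weights.values())
shares = {name: commission * w / total for name, w in weights.items()}
# The pioneering, heavily staked recommender takes the bulk of the commission,
# even though the latecomer's taste proximity is slightly higher.
```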
This business model has nothing in common with past attempts to create a cultural stock market to accept predictions of future box office takings from movies in the process of being made. That was reminiscent of a racing totalisator and did not yield good results, principally because the gamblers were not creating any useful information. In our case what we are looking at is not guesswork but the exchange of actual experience, and that radically alters the situation.
3.4.3 Symbolic Capital and Symbolic Values
In addition to symbolic values there is one further category which it is worth learning to measure, and that is the symbolic capital of various people. This is the ability of a person to produce symbolic value: works of art, maxims, texts, communications, anything which engenders quality time which can be measured in second money. Basing ourselves on the assumption that symbolic capital and second money correlate in the same way as ordinary capital and profit, we can put a value on symbolic capital. For this we can make use of existing practice: ordinary capital is assessed by its ability to generate profit, and symbolic capital should be quantified correspondingly in terms of revenue from second money.
Viewed in terms of the production of quality time, symbolic capital and symbolic values are kindred concepts. The difference is that the former is more a potential, while the latter is its realisation. A person possessing symbolic capital produces texts (in the broad sense of the word) and these, in proportion to their value, engender quality time. In actual fact the relationship here is more complex and less clear: works engender quality time and so does the creative artist himself, sometimes directly, without creating a work. His own behaviour is a kind of text. An evening in the company of a major personality is an example of how symbolic capital can translate directly into value. At the same time, a text can be a component of symbolic capital. It is not too much to say that not only a creative artist can possess symbolic capital, but so can the products of his creativity. In order to avoid confusion, however, we shall reserve the term “symbolic capital” for the person, and use the term “symbolic value” in respect of creations, whether these exist in verbal, written, or plastic form. Until capital is mobilised for the production of values, while it is awaiting its moment, it exists only in potential and it is impossible to form a judgement about it. In just the same way, symbolic values sometimes have to wait for their time to come and for them to be appreciated as they deserve. But even when capital is vested in some kind of form and presented to the public, it is still impossible to measure it with the kind of accuracy we are accustomed to in operations involving conventional capital, because it is difficult to predict the return from it. Texts are an objective manifestation of symbolic capital. To continue the parallel with conventional economics, this is a product created in use.
Symbolic capital encompasses the whole spectrum of human attributes affecting the ability to create high-quality communications. These include experience, knowledge, taste, social competence, motivation, charisma, talent, and many other human qualities. Making use of these assets and basing himself on already existing texts, a person creates new texts which possess greater or lesser value, and which circulate in broad or narrow circles. It is not possible to dissect symbolic capital, to weigh its various constituents, but it can be rated overall on the basis of values created with its involvement.
All the symbolic values ever created add to the corpus of culture, increasing its total symbolic capital and preparing the way for the appearance of new values. Symbolic capital, like economic capital, is not self-sufficient: its generative, creative capacity is dependent on external factors, in particular such cultural institutions as (primarily) copyright, the material infrastructure, and the intellectual state of society. A potential movie maker of genius will have no career in a country without a cinema industry and (in all probability) probability theory would not have occurred to Pascal had he not had the game of dice to ponder.
It is of great practical importance that the series “symbolic capital–symbolic product–symbolic effect” leads to a category which can be measured: quality time. If we start from this end, we can quantify the two preceding links. From payment for quality time we can deduce a magnitude for the values which generated the quality time, and from these values we can deduce symbolic capital. In economics the technique for doing this is known as the earnings multiples approach, and it is used for valuing businesses. If, say, a business is valued at ten times the annual profit it makes (the multiple depends on the riskiness of the business segment concerned), then the capitalisation of a firm which makes a profit of 1 million a year can be assessed at 10 million.
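The worked example from the text, in a few lines of Python; the second-money figure is hypothetical:

```python
# Earnings multiples, as in the text: capitalisation = multiple x annual profit.
def capitalisation(annual_profit, multiple):
    return annual_profit * multiple

# The business case from the text: a tenfold multiple on 1 million of profit.
firm_value = capitalisation(annual_profit=1_000_000, multiple=10)

# Applied by analogy to symbolic capital: the "profit" is the annual flow of
# second money attracted by a creator's works (the figure is invented).
symbolic_capital = capitalisation(annual_profit=20_000, multiple=10)
```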
When speaking of symbolic value we are usually thinking of large, socially significant works, but the term should be understood more broadly: a great variety of manifestations of personality can engender quality time – oral performances or talks, public or private actions – but irrespective of the kind of communication, a sign of value is a positive response from other people. Needless to say, the degree of influence exercised depends on the size and composition of the perceiving audience.
Symbolic capital is the engine of the New Economy. The mere recognition that it is possessed not only by established creative artists but also by ordinary people is an important step. Ordinary people also emit information and, despite the difference in the power of the effect they produce, they can be included within a single system of symbolic coordinates and ranking. This reflects positively on the self-respect and motivation of members of communities. It is highly relevant to both professional and amateur creative artists, and indeed to people who are neither. In the past symbolic capital was considered the province of an elite, but now we find that everybody possesses it. That does not mean that differences between people have been erased, but they are mitigated: from qualitative differences they become merely quantitative differences with many steps and gradations. This enables us to get away from a rigid division into the first rate and the lower levels, which in the past clipped the wings of many people.
With the transfer of activity to web networks, the barriers and distances between people decrease in just the same way as for a business when it enters the stock market: the difference between the founding owners of the company and its shareholders in the general population is evened out, and the transition from one status to the other becomes a matter of quantity. Something analogous accompanies the appearance of web media, which can be more influential than the traditional mass media.
Although the value of the individual is a widespread postulate in present-day society, it remains largely declarative or only occasionally evident, rather than being something that people really have any sense of. If symbolic capital is measured and presented objectively, this may facilitate a transition from good intentions to actual practice. Given the prospect of social and group recognition, people may develop a taste for a symbolic career. Symbolic stratification is a real need in society, and felt all the more acutely as new groups organise themselves. Objective measurement may provide a crucial impulse. The conditions for this already exist, with communications and the attitudes towards them documented in third-generation social networks. There is every reason for one of the important subsystems for measuring symbolic capital, personal reputation, to emerge in these networks.
3.4.4 Symbolic Capital and the Reputation System
Reputation is what other people think about the merits or demerits of a person, an institution, a business or a good. As regards people, reputation is part of their symbolic capital: a projection of that capital into a particular sphere of activity, significant within the framework of a particular community. Reputation performs the important function of predicting and guaranteeing quality. When accessing any information, people trust it to the extent that its source has given a good account of itself in the past. Reputation grows out of attitudes towards earlier activity. To put it simply, most people trust what has proved trustworthy in the past. They trust, knowing that the person they trust will think very carefully before saying or doing anything which might harm his reputation. Brands operate on the same principle: investments in making themselves well known serve as a pledge of the reliability of the business.
The object of our interest is reputations in formal communicative systems, primarily on the internet where communications are documented and can be categorised. Reputations formed in everyday life from miscellaneous information are not considered here because of the difficulty of detecting them. Obviously, reputational hierarchies depend heavily on which transactions between participants are taken into account, how far they reflect the real relationship, and how much is left out. Naturally, the more representative the registered transactions and the more precisely they are interpreted, the more reliable the reputation. The citation indices used for scholarly publications are a method of calculating reputation as simple as it is crude. They take account only of references to publications, but since no allowance is made for whose judgement a citation reflects, that of a professor or of a first-year student, or for how significant the context is in which the quotation figures, important information is lost. This has negative consequences for scholarship.
Internet search systems have made great advances in this area, since the order in which they deliver website addresses and pages in response to user enquiries is linked to the reputation of the sources, a reputation which the engines themselves first establish. The Google PageRank algorithm computes the significance of a particular internet page using the logic that if a page contains a link to another, it is giving that page its vote. The more votes a page collects, the more important it is assumed to be. Account is also taken of the weight of the page which is voting. Search systems have, in addition, solved the problem of mutual backscratching, against which academic citation indices have no defence. These principles have acquitted themselves well in practice and should be used when calculating personal reputation in groups.
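The voting logic just described can be sketched as a short power iteration in Python. This is a simplified model of the published PageRank idea, not Google's production system; the link graph and the parameter values are invented for illustration:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank sketch: each page votes for the pages it links
    to, and a vote's weight scales with the rank of the voting page.

    links: dict mapping each page to the list of pages it links to.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start from a uniform rank
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outgoing in links.items():
            if not outgoing:                    # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:                               # split the vote among targets
                share = damping * rank[p] / len(outgoing)
                for q in outgoing:
                    new[q] += share
        rank = new
    return rank

# C collects votes from both A and B, so it ends up with the highest rank;
# A in turn inherits weight from the highly ranked C.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
```

Note how the weight of the voter is built in automatically: a vote from the well-linked page C is worth more to A than B's isolated position could ever confer.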
In social networks reputation has a particular applied significance since it indicates the “quality” of the person you are communicating with. The demand for such grading must be extremely high, and one can only express amazement that the most visited international and Russian social network sites make virtually no use of this tool. There may be a conscious reluctance to reveal who is who, a fear of scaring off a proportion of users by indicating their place in the insider hierarchy.
For obvious reasons, reputations are particularly important in recommender systems, and these have far more data on the basis of which to establish them. If all that is registered in conventional social networks is whether a user has said “thank you” and whether a posting has been awarded a plus or minus, on recommendation sites a whole range of transactions can be taken into account: a user expressed thanks, followed advice, gave a reference, positively or negatively rated a comment, text, or photograph, requested a recommendation, forwarded a posting, etc. A comprehensive system for calculating non-professional reputations is to be found on the Russian Imhonet website, where each of these standard actions carries a particular weighting, depending on how much it contributes to reputation. More than a dozen actions are identified and, just as in the search systems, mutual backscratching is identified and eliminated. Reputation calculated in this way also grades interest in the user and his reliability, so-called goodwill. This serves as a further line of defence for the social network from fakes, fictitious infiltrated users, whose task is to manipulate data in the interests of sundry groups.
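A weighted-transaction scheme of the kind just described can be sketched as follows. The action names and numerical weights here are hypothetical, chosen only to illustrate the principle; the actual weightings used by Imhonet are not reproduced in the text, and the duplicate check is only the crudest possible guard against mutual backscratching:

```python
# Hypothetical action weights, loosely modelled on the kind of scheme
# described in the text (not Imhonet's actual figures).
ACTION_WEIGHTS = {
    "thanked": 1.0,
    "followed_advice": 3.0,
    "positive_rating": 2.0,
    "negative_rating": -2.0,
    "forwarded_posting": 1.5,
    "requested_recommendation": 0.5,
}

def reputation(transactions, weights=ACTION_WEIGHTS):
    """Sum the weighted transactions addressed to one user.

    transactions: list of (sender, action) pairs. A repeated action from
    the same sender is counted once -- a crude guard against two users
    inflating each other's scores by repetition.
    """
    seen = set()
    score = 0.0
    for sender, action in transactions:
        if (sender, action) in seen:
            continue
        seen.add((sender, action))
        score += weights.get(action, 0.0)
    return score

# One user thanks twice (counted once) and another follows advice.
score = reputation([("anna", "thanked"),
                    ("anna", "thanked"),
                    ("boris", "followed_advice")])
```

A production system would also weight each transaction by the sender's own reputation, in the PageRank manner described earlier; that refinement is omitted here for brevity.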
In the generally accepted navigation systems which provide ratings, charts and the like, data about the source of information is lacking, and this seriously detracts from the information itself. One cannot tell whose opinion the ratings reflect, and in charts constructed on sales data it is not even clear whether the public finally gave a product a positive or negative assessment. In recommender systems everything is transparent: you can see who is awarding reputation, and how it is arrived at.
A person’s reputation can be calculated not only in general but also in respect of particular areas of expertise, in literature, cinema, wines, photography or whatever, and in respect of the groups within whose framework it matters. This is of fundamental importance, since reputation in general is one thing, while reputation in a particular area is quite another. One can compute the overall reputation of a group in the eyes of wider society (or of some segment of it). One can calculate reputation on the basis of particular areas of competence, since it is often the case that a person is strong in one respect, as a critic or populariser, but does not enjoy more general respect.
A multi-factor calculation of reputation is a step forward from anything that has been done in this area so far. Second money will allow us to further raise the quality of the computations. It will enable a network’s participants to express their attitude to each other’s actions or comments quantitatively. PageRank, although it takes account of the density of transactions, does not know how much reputation is communicated in each. This is inevitable, since the search engine is dealing with inanimate objects, and websites cannot be asked directly how well one thinks of another. Accordingly, the order in which search engines deliver information sources in response to user enquiries does not reflect reputation in its precise meaning. A collaborative system, on the other hand, operates with unambiguously treated signals from people, which enables it to replicate precisely the logic by which off-line reputations are built. If to this we add the difficulty of mechanically interpreting the aims of enquiries, we can see how search results might be improved. For this, search and recommender technologies should be brought together to place a collaborative superstructure over the search system. That is, first a search should be conducted using the customary methods, and then the sources found should be filtered with the aid of a collaborative algorithm, promoting to a place among the first entries for a particular user what has previously satisfied enquiries from his reference group. This could create a next generation search engine capable not only of selecting information sources, but also of delivering them in accordance with the priorities of each individual.
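The two-stage scheme proposed above, a conventional search followed by a collaborative filter, might look like this in outline. This is a sketch only: the satisfaction scores attributed to the user's reference group are hypothetical data, not the output of any existing API:

```python
def rerank(results, group_satisfaction):
    """Collaborative superstructure over a search engine (sketch).

    results: sources in the order the search engine returned them.
    group_satisfaction: hypothetical scores recording how well each
    source previously satisfied enquiries from the user's reference
    group (higher means more satisfying).

    Sources the reference group found satisfying are promoted towards
    the first entries; the engine's original order breaks ties.
    """
    return [
        src
        for _, _, src in sorted(
            (-group_satisfaction.get(src, 0.0), pos, src)
            for pos, src in enumerate(results)
        )
    ]

# The engine returns a, b, c; the reference group has been best
# satisfied by c, so c is promoted to first place for this user.
personalised = rerank(["a", "b", "c"], {"c": 5.0, "b": 1.0})
```

Sources unknown to the reference group keep their engine-given order, so the collaborative layer refines the ranking without ever discarding the underlying search.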
3.5 The Mechanism of Social Revolutions
3.5.1 Three Mechanisms for Social Innovation
We have discussed consumer testing, voluntary post-factum payment, new business models in the digital industries, and other innovations which in the near future will be widely implemented. These are topics which impinge directly on business interests and are accordingly being actively discussed in professional circles and the media. The latter are, of course, themselves at the epicentre of events and seriously concerned that the content they create migrates to the internet, there to be consumed free of charge. The digital sphere which created the problems is likely to be the first to find a solution to them.
The question is, how and when will this occur? The practices and norms of behaviour described above are subject to a network effect: the more people profess them, the greater their value for each individual and therefore the more assured is their growth. While there are still not enough people involved, the web (like a club) is not all that exciting a place to be. A telephone would be no use if you were the only person who had one, which meant there was nobody to call. A system of collaborative filtering does not generate high-quality recommendations if it only has a few users. This is the so-called “cold start” problem. One has to surmount a certain quantitative barrier before stable, self-sustaining growth will occur. The new group will accumulate the critical mass of supporters required for lift-off only when a sufficient number of people believe in its value, and this belief comes through use. There is a chicken-and-egg situation here. The difficulty of institution building (as of any innovation) lies precisely in the need for these processes to run in synchrony. From time to time, however, the new does manage to break through. Of particular interest are changes which are not imposed from above but grow from the grassroots, free activity of people, and this is a characteristic of the New Economy. Such novelties are closely linked to the mood of the masses, which in turn is associated with the dissemination of information within a society. The mechanisms of dissemination have been one of the least studied aspects of social life, hence the groundbreaking nature of the research discussed above. At the same time we have stressed that the most convenient bridgehead for such experiments is social networks on the internet. We need only to decide what to focus on, which is what we will do now.
There are many reasons why it is desirable to understand the mechanism of social start-ups, and awareness of the conditions under which the institutions of the New Economy (collaborative filtering and micro-patronage) can take off is also important for the present investigation. This is what we need to concentrate our efforts on.
Much remains obscure about how institutions, standards and customs emerge, even when this occurs right under our noses, to say nothing of the distant past. Familiarising ourselves with existing descriptions of this process, we often have the impression that something fundamentally important, something decisive for the outcome, is being missed. Sometimes social consciousness registers the fact of the birth of institutions only after a long delay. Even when they are up and running, it is by no means the case that they are immediately understood. For example, the importance of even such an exceptional invention as the department store was not fully understood or appreciated at the time. There was, after all, more to the idea than merely offering a variety of goods in a compact area, which had already been done in ancient markets. The department store (one of the first opened in Milan in the second half of the nineteenth century) caused an information revolution by reducing the transaction costs of haggling for the best price. Before this, the shopkeeper himself haggled with the customer, which meant he was constantly busy. In a department store prices are fixed and openly displayed, so there is no need to haggle. This makes it possible to put a less qualified person behind the counter while the entrepreneur gets on with expanding his business. A similar benefit, from the viewpoint of the customer, was that previously sorting matters out with the shopkeeper was the duty of the lady of the house, whereas now the servants could be dispatched to shop in the department store.
Or another example: image advertising. For a long time economists were perplexed as to whether this was a form of coercion of customers. Eventually it came to be seen as a fundamental institution with roots in the most ancient practices of sacrifice. By paying for such advertising, the advertiser gives a pledge which signals the quality of his goods. (A substantial contribution to the theory of this kind of signal was made by Michael Spence, one of the pioneers of information economics.)
The promotion of institutional innovations, and of particular techniques to enable them to enjoy a viral distribution, comes almost entirely under the heading of search and serendipity rather than of routine technology. There was a particularly dense fog surrounding the early stages of the process, and this drew our attention to the theory of superstars, snowball theory, and cheap talk theory, which were elaborated in connection with such situations. They were all formulated at different times, but for us what matters is to link them together and apply them to a wide range of phenomena. Although by and large only the theory of superstars can be considered a real theory (and that only at a pinch), while the other two concepts are one step down, the significance of this trio for understanding social evolution can hardly be overstated. I am certain they deserve to be far more widely known and receive a great deal more attention than they do at present.
The central question addressed by the theory of superstars is: why, out of numerous representatives of the creative professions, do particular individuals end up at the top, paid astronomical sums? This is much the same question as interests us: why do certain innovations take off, while others do not? One of the explanations, propounded at various times by Adam Smith, Alfred Marshall, and Sherwin Rosen, comes down to the fact that several run-of-the-mill singers cannot replace a single Chaliapine. Because of immense demand, an outstanding talent becomes a superstar. This thesis is not entirely convincing: even if at a given moment there is a shortage of Chaliapines, the stars do not disappear. Sheer force of talent, at first sight the obvious and self-evident explanation, will not do. It might close off the argument if somebody were head and shoulders above the rest, a star of the first magnitude, but such instances are rare. Far more typical is a situation where a pool of competitors is in no wise inferior to the leaders and would seem perfectly well able to attain those heights, only it doesn’t happen. A huge disparity in salaries may (and does) occur even when there is no particularly marked difference in talent. Moshe Adler associated the limited number of stars with the limited space on the Mount Olympus of social attention. Consumers do not need an infinite proliferation of celebrities, because then it would be difficult to find someone else to discuss them with. If everybody chose their own star, conversation would dry up.
This explanation is wholly in the spirit of the group economy: empathising with popular performers, being up-to-date on hot topics, works, events, enables people to minimise the costs of making and maintaining contacts. The motive behind such synchronisation of preferences is rational use of such resources as attention, memory and time.
People have a great urge to socialise, and what provides the excuse is not all that important. A superstar is created not so much because of the performer’s inherent qualities as by the need to satisfy a demand, that is, by an extraneous factor from the sphere of information economics. Similar considerations apply to omens and superstitions, whose specific semantic content is fairly arbitrary but whose purpose is to reduce the number of options of behaviour, and thereby the costs of choosing. This probably all sounds a bit odd, but everything falls into place if we treat human attention as a resource for which the competition is no less fierce than for customers with disposable income. The fact that the explanations proposed by the theory of superstars do not seem obvious only shows how novel the logic of the New Economy still is.
If superstars are recruited from a pool of people of approximately equal talent, how come one of them rises to the first magnitude? The answer is to be found in the metaphor of the snowball, which explains the ultimate breakthrough into popularity as being due to an advantage at the start. The handful of snow which starts rolling first gets more snow stuck to it faster and more easily, and leaves less behind for its laggard competitors. As applied to top internet sites, this means a critical mass of users has been acquired who have mastered the useful functions, become familiar with the layout, provided themselves with contacts (“friends”), and communicated something about themselves. In a word, they have built a home in cyberspace. For a competing resource to lure these people away needs a Herculean effort, which can be theoretically quantified as the time and money the user will need to spend in order to adapt to a new site. Let us suppose that it is going to take 2 h to become familiar with the new place and sort everything out. If we treat this as low-paid work and assess it at a rate of $5 an hour, then to attract a single user will cost a minimum of $10. In practice the sum will be higher, because something needs to be added by way of compensation for emotional attachment to the earlier habitat which the user is being asked to sacrifice. The cost of attracting that first, most difficult, million users adds up to a very appreciable sum. In addition, those creating the network do not know exactly when or how they will be able to convert the heads they have hunted into revenue.
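The back-of-the-envelope calculation above can be written out explicitly. The figures are the text's own illustrative ones, and the attachment premium stands in for the emotional compensation whose size is unknown in practice:

```python
def acquisition_cost(hours_to_adapt, hourly_rate, users, attachment_premium=0.0):
    """Lower bound on the cost of luring users from an incumbent network:
    adaptation time valued as low-paid work, plus an optional premium
    compensating emotional attachment to the old habitat.
    """
    per_user = hours_to_adapt * hourly_rate + attachment_premium
    return users * per_user

# The text's figures: 2 h of adaptation at $5/h, for the first million users.
floor = acquisition_cost(2, 5.0, 1_000_000)  # a $10 million minimum
```

Any non-zero attachment premium only pushes the figure higher, which is the snowball's defensive moat in monetary terms.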
It is still unclear, however, why one particular handful of snow should be the first to start rolling. The theory of superstars has no answer, although from a practical point of view this is a crucial issue. In order to find the answer, let us turn to the concept of “cheap talk” from game theory, where the term is applied to communication at the start of a match which determines its course. A textbook example is bluff in a game of poker. Economists have already used the term “cheap talk” to explain the success of advertising campaigns in which magical qualities are ascribed to goods and a unique aura to brands (Birger Wernerfelt). For example, a certain company claims that people who buy its products will find the solution to some kind of insoluble problem in their life. For certain users, those who are gullible or who merely enjoy trying out anything new (and 2 or 3% of society consists of such people), this is enough to make them choose the product, despite the fact that the producer’s assurances may be entirely groundless. Hence the epithet “cheap”: the claims are unproven. If a sufficient number of such people is acquired, the snowball is on its way. Thereafter it grows without outside help in accordance with the scenario described above. A claim, thrown out into society virtually at random, gains weight and may become a self-fulfilling prophecy. This is how certain goods focus demand on themselves. On the whole, it is to the advantage of customers because the price will go down. The same applies to ideas and rules of behaviour. If everywhere one constantly hears that gentlemen prefer blondes, they really will.
If a horoscope prescribes that a Capricorn is steadfast and an Aquarius contrary, that is often how they will be perceived. A necessary condition for bringing people with similar preferences together is to identify a cause and a rallying point for the first adherents. It is relatively easier for others to join the pioneers, and the numbers swell in accordance with snowball logic. People prefer the most successfully launched goods and brands, despite the fact that others available at the time may be no worse. Thus does “cheap talk” act as an impeller to get things going, and thereafter the course of events can be directed along a desired scenario. The same result is achieved by extravagant advertising, a path chosen by producers confident of the quality of their goods, and also by those who find it most convenient to make a pledge through image advertisement to persuade people of the quality of their goods. Out of context and without an understanding of the role ascribed to it, “cheap talk” looks arbitrary and fairly senseless, but in fact it does make sense, and considerable sense at that. It focuses people’s attention on a relatively small number of alternatives and minimises the delays and costs of choosing.
3.5.2 The Art of “Talk” in Contemporary Art
The range of phenomena where the logic of cheap talk operates is extensive and includes, for example, political and stock market processes, scientific doctrines, a variety of subcultures and much else. These, if not viewed from the perspective of the trio of theories we are considering, can seem to be merely the play of chance or a scenario rigorously programmed by circumstances. Certain sectors, for example, sport, where in most cases quality is objectively registered, are relatively immune to the influence of injections of information. Even in sport, however, winners are sometimes “appointed”, and even in engineering technology, which would appear to be wholly resistant to manipulation, “cheap talk” plays no small part. It finds scope, for example, where there needs to be a decision on the financing of a particular technological policy and where the merits of the alternatives are not, at that moment, entirely obvious.
As regards contemporary art, “cheap talk” has acquired such power that it often stands in for the works themselves. Artists organise surprising actions, for example, wrapping buildings or bridges in cellophane, or exhibiting doggie doo and seek thereby to shift the boundaries of perception of the public. Pictures, installations and performances may be full of profound meaning (like, no doubt, the above-mentioned works which all caused a great stir at the time), or they may have a less impressive content or even have no meaning at all, amounting to the purest placebo. Their mission is to assemble a group of people, and it doesn’t much matter if the king is found to be wearing no clothes. Taken outside the logic of group or club communications, the happening may have no significance whatsoever.
The apotheosis of this tendency has been the flash mob event, during which people get by admirably without an artist. All they need is a coordinating signaller. Indeed, if at one time artists rejected form, then content, why are they now needed at all? Matters have developed to a point where their services are rejected. Why should the public not ask, “What’s so special about him? We’ve got polythene too”.
In the spectrum of emotions evoked by contemporary art, awe is far from enjoying first place. This has facilitated the appearance of a new trend in culture. While the artist was surrounded by an aura of unattainability, seen as God’s elect, expected to produce a shock effect on viewers, few ventured to try their strength as creators and appear in public. As soon as this restriction was lifted, a great flood of amateur creativity washed over culture. In part this brought about the burgeoning of the blogosphere, where users do their darnedest to create their own flash mobs and to glory in their moment of fame. This also explains the exaggerated popularity of broadcasts in which dilettantes squawking like chickens sing some mediocre ditty and the audience joins in. A revival of amateur artistic activity is occurring on an amazing scale, and although at first the downside of the process is more in evidence, in future this will benefit culture, including its more exalted realms. Just as football in the courtyard feeds into the national team, so karaoke produces at least excellent listeners. Taking part is far more positive than passive viewing, although at the present stage there are some grounds for concern since the pyramid of taste is sinking towards its base and showing no signs of rebounding any time soon.
While the masses, having gained a different level of access to culture, are trying their strength in creativity and deliriously pursuing recognition at group level, those for whom artistic creativity is their bread and butter are selling out. They are increasing their output and producing weaker and weaker products. A host of competitors of a new kind is taking a slice of the attention market away from them and the quality gap between what they and their rivals produce is narrowing. With the appearance of Web 3.0 tools, users are taking over the function of experts and discovering talent themselves, putting publishers under pressure. Similar threats hang over writers, at least those of average or little talent. They are being successfully replaced by bloggers. If by and large the democratisation of culture can only be welcomed, one hopes that artists will not surrender to cheap talk to the extent that they are reduced to the status of a beacon for someone else.
3.5.3 “Cheap Talk” as a Signal to Diverge?
In the theory of superstars the sequence of cause and effect is reversed: the actual qualities of an object mean little by comparison with the dynamics of its popularity. This provocative thesis not only throws light on the origins of superstars but also enables us to look differently at a whole swathe of social phenomena where we are puzzled that the situation has resolved itself in one way rather than another. Where there is a sheaf of possible scenarios, there is always a cheap talk competition to decide the winner. This process is not as random as many people believe.
The spring which drives the mechanism is that human need for high-quality, congenial contacts which is satisfied by joining a suitable community. Superstars, brands, religious trends, political parties, fan clubs (and also such issues as whether to eat your boiled egg from the big end or the little end, or how to perform religious rituals): it is all a matter of finding ways to split up into groups, thus optimising one’s social habitat. No small contribution to this is made by art, which thereby demonstrates its social credentials. Artists are emitters of artistic statements and the public serve as a filter which allows works through or blocks them depending on how far they meet their need for social consolidation or fragmentation. On the whole it does not really seem to matter so much what the issue is, so long as it is presented starkly. This will prove the occasion for people who are in some respect outsiders to differentiate themselves from others, to diverge, and for those with some degree of kinship to coalesce. In retrospect researchers imbue clashes which played a notable role in history with portentous meaning, but the truest and deepest meaning is simply that a question is openly raised which provokes people into thinking, deciding their position, and adhering to one side or the other.
Facing people with a dilemma is an ancient and infallible method of dividing society into camps based on the documented attitude of people to a doctrine, a topic, facts, or a ritual. This same universal technique is at the basis of today’s mutual filtering, which people use on each other and in which all manner of means of attracting one’s own sort of person are brought to bear: dress, manners, actions, provenance, links with mates from school and college, and general acquaintances. In the computerised version, selection is conducted also along lines of tastes, preferences and interests. The organic way in which collaborative filtering is spreading on the internet derives from the fact that precisely these procedures (only more spontaneously and less consciously) have always been used. Present-day technologies have merely automated and perfected them.
3.5.4 Social Imprinting
If everything relating to the formation of superstars were really so much a matter of technique, and the knack were merely to get known first, why doesn’t everybody try their luck? Alas, the trick is not so much getting off to a quick start as managing to pluck the right heartstrings, since otherwise the snow (the resource of attention) will not stick. The trio of laws described above gives no guidance as to what happens at the very earliest, embryonic, prenatal stage of preference formation, and yet this is precisely what decides which offering of cheap talk will be taken up. Without this link the whole chain of deductions is like a rope ladder which does not quite reach the ground. This link, I suggest, is related to imprinting, a phenomenon popularised by Konrad Lorenz.
The essence of this phenomenon is conveyed by a photograph which has found its way into the textbooks: it shows Lorenz walking along, followed by a group of goslings. It transpires that, shortly after hatching, goslings seek the image of their mother. They identify her solely because she is the first thing that moves in their field of vision, and follow her. It is important who or what that first thing is. In the experiment it was Lorenz, and the goslings were drawn after him as if after their biological mother. His place could equally well have been taken by a moving balloon. They would have accepted that and, having made their choice, would have completely ignored their real mother if they met her.
220.127.116.11 Imprinting in Man
Imprinting has been most fully studied in animals. It has been established that before they appear in the world their first steps are programmed in the form of a collection of reactions to the conditions they are going to meet. These investigations, together with much valuable information about instincts and the organisation of the brain, have enabled us to unlock the secret of how canaries sing, and to explain how a salmon travels all the way back to its long abandoned birthplace to lay its eggs. The topographical sense of bees and wasps is also a result of imprinting. Human beings have been studied less, since one has to be considerably more circumspect in handling them: a researcher is hardly going to shove some inanimate object in front of an infant in a bid to replace its mother, although for a whole variety of institutions which fish for human beings (politicians, traders, clergymen) there could be nothing more tempting than learning how to programme personality and mould character.
For all the sparsity of the data about imprinting in man, the mechanism is so important for the New Economy that it is worth trying to generalise everything which in one way or another relates to it.
We know that people are more vulnerable to imprinting in certain states of mind, when something, once accepted, becomes ineradicable. During these particular periods of life, certain images, positive or negative, can become fixed. These then assume an unchallengeable significance for the person, persisting for many years and predetermining principles, inclinations, methods of perception and so on. Periods of particular impressionability are found mainly in infancy, and to a lesser extent in early adulthood. Something similar to imprinting does, however, also occur later, although in a less pronounced form. In infants, subsequent social conditioning has yet to blur the picture, so they have been studied more and with greater success. In adults, imprinting tends to be confusingly overlaid by other effects (which does not mean that it is not there). One argument for the existence of late imprinting is the fact that certain mental techniques make reimprinting possible (Timothy Leary), that is, reprogramming the alignment of an individual's personality. The practice of conditioning a person away from alcoholism probably belongs to this category. Imprinting in man is not a lifelong trap, but immense efforts are needed to escape from it; occasionally this happens spontaneously over time. It has been established in recent years that it is possible to deliberately edit memory with the assistance of drugs. Unlike the inducing and development of conditioned reflexes, as studied in Pavlov's dogs, with imprinting the fixation is stamped on the personality on a single occasion, without repeated stimulation. The necessary condition for imprinting is an emotion which sears the depths of the soul, which is probably true of any persistent effect in the mind. The durability of an impression is always related to the strength of emotions, and these in turn depend on the mind's willingness to respond to the stimulus, and on its magnitude.
3.5.4.2 Narrative Knowledge About Imprinting
Psychology has neglected the topic of imprinting in man. Perhaps in the past there was a lack of suitable tools for such complex research, so that it was impossible to produce experiments which could be replicated. Today the issue just seems to be overlooked. The present-day economy is an interweaving of motivations and desires and, engaging with it, we simply cannot ignore what lies at the root of the forces which motivate a human being. Observations suggest that imprinting is fundamental to much of human behaviour, if less incontrovertibly than in our fellow animals.
The fact that psychology is ill-informed about imprinting does not mean that we do not meet its manifestations in everyday life. How many stories have we heard of a child developing an obsession! Some uncle eight times removed has appeared briefly in the family wearing his captain's cap and before you know it a boy is halfway to naval college. Something of the kind lies behind the history of Peter the Great's toy flotilla, said to be the source of his later love of the navy. Perhaps a boy has seen how girls his own age dress their dolls and much later discovers a predilection for that particular kind of appearance. The role of imprinting in matters of the heart is not entirely clear, but is certainly significant. At the very least it can be traced in the perception of types of beauty, which are laid down by toy factories and polished by advertisements and glossy magazines. And of course there is that fatal sensitivity, made the more powerful by long abstinence, which creates a fertile soil for love at first sight. Imprinting also influences the particulars of sexual behaviour. It seems that a majority of transvestites were dressed in clothing of the opposite sex in childhood. Another example, from the Middle Ages, tells how the sight of a naked body beaten with rods was a source of erotic stimulus for many. In the modern age, with the abolition of such public punishments, sexual perversion of this kind has become rarer. Relations between parents and children, etched in the subcortex, cause children, when they grow up and themselves become parents, to replicate past experiences, negative as well as positive. Having suffered in the past, people are unable to free themselves from the programming.
Imprinting manifests itself in phobias and fears, in the mastering of languages and in many other ways. Various viral effects are evidently associated with it (including those on the internet), when some craze becomes instantly popular. Successful advertising, too, cannot do without it. Delving back in memory, a person may sometimes resurrect some pivotal moment when a decision was suddenly ready to be adopted, or a new way of perceiving, or an attitude to the world (aesthetic, pragmatic, moral, religious, scientific), which became dominant. At some point in the distant past a switch became fixed at a certain position, and the child grew up an activist or a conformist, came to prefer numbers or letters, inclined to a particular way of filtering information, or formulated the questions to the world which would guide him for the rest of his life.
How educators would like to catch this moment and direct the child’s powers in his best interests as seen by his parents! If only they could etch into his subconscious a “correct” attitude so that thereafter everything would go swimmingly and the person would absorb only what was wholesome. No such precedents are known. Influencing consciousness is the province only of specialists in the behavioural rehabilitation of alcoholics, and brand managers. When something is brought close to an infant’s eyes or he has his attention focused on some detail, he is being given the keys for further, subsequent perception. Science does not, however, tell us how to manage imprinting. Nobody presumes to predict what will take with a particular person, or when.
3.5.4.3 Imprinting and Attention Economics
Much remains obscure about a person’s imprinting, but one thing is not in doubt: it is linked to the mechanisms of decision making and hence plays a role in the economy of desires. It is of fundamental importance that, even before a person finds himself faced with a choice, he is largely preprogrammed, not only by ordinary, rational considerations but also by certain crucial episodes in his past or, more precisely, the imprints they have left in his mind. An economic logic can be found in imprinting itself. In essence it operates like an information filter, letting one thing through but blocking another. The whole system of personal attitudes may be regarded as a complex hierarchical filter. Along with early, basic imprints it includes later images, acquired reflexes, fixed fantasies, motivations – all of which determine attachments, habits, perception, character traits, and indeed the whole way of life. Each new imprint attracts or rejects, conditioning selectivity in the reception of information. There is thus a direct link between imprinting and attention economics. The mechanism simultaneously delimits permissible decisions and serves as a regulator of the intake of information. Without it the human brain would grind to a halt, unable to cope with the torrent of signals.
If we approach this field with strict criteria, only a fairly narrow circle of phenomena will be found to relate to imprinting as such, but a whole series of kindred mechanisms operate similarly, leaving impressions in the human mind. These traces may differ in depth and clarity and may result from repeated, deliberate conditioning rather than occurring as instantly and spontaneously as imprinting. There may be substantial differences between them and in how they affect consumer behaviour, but for the time being the mechanisms of impression have been insufficiently researched and we cannot fill in the detail. The main point is that, quite independently of marketing techniques for leaving impressions, the human mind is peppered with them. The moment decisions have to be taken, past impressions operate like railway points, directing actions along one track or another. The mechanisms of acceptance or non-acceptance also vary, depending on an unconscious willingness to allow something into oneself (imprinting) or a conscious opposition to, for example, advertising. We can draw an analogy with the procedure of weighing something, picturing the imprint as a heavy weight placed on the scales and all the other motivations in the form of makeweights added to one side or the other and overall adding up to a considerable weight in their own right.
As the personality develops, the system of filters which help the mind to process reality becomes more complex. New impressions are either overlaid on existing ones or are rejected by them. Before they are allowed to settle into the existing motivational framework, certain conditions have to be met: they have to cohere with previously acquired imprints. Depending on that, certain impressions (motivations, desires, habits) find the road ahead open, while others find it closed. Imprints thus work directly on their own account, but also indirectly by programming subsequent inclinations. These fundamental and derivative impressions are consolidated in the mind as habits and a way of life and are interlinked by a thousand different threads. You cannot just tear one out of the fabric. A person who is overweight will find it difficult to decide simply to give up eating an evening meal. If an innovation conflicts with the system of impressions already established, the latter will try to nullify it.
3.5.4.4 From Imprinting a Person to Imprinting a Society
Imprinting can also be found in society. It influences the direction taken when there is a fork in the road of history. When they try to interpret such moments, people often talk about providence or chance, but imprinting suggests a different explanation. At the social level this mechanism manifests itself in the totality of imprints most frequently encountered in people, in basic values and ethnic stereotypes which regulate many different spheres: the attitude to love, discipline, faith, puritanism or hedonism, usury, a leadership-or-community orientation. This amalgam is given the somewhat outmoded title of national character. Common morality arises from the synchronised imprints received from a shared environment with stimuli which are similar for everybody. Cliches are drawn from language, folklore, texts and everyday life, from "culture" in its broadest sense. Life sets traps for the mind, but also helps it to become receptive (imprint vulnerability), creating a tension when something is lacking and as a result evoking a dramatic emotional reaction when the desired stimulus occurs. In different geographical regions the mind of a child is "stamped" in different ways. The national way of life has its unique primary socialisation of children. Local norms of swaddling, hygiene, safety, obedience and independence, cuisine informed by maternal warmth, the culture of emotional upbringing within the family: these are the rocks to which personality is anchored. Since the norms of everyday life are the same for everyone and permeate all layers of society, this accumulation of deeply buried stereotypes awaiting their hour is also largely homogeneous.
Imprints are synchronised within one generation but are transformed by the next. Conditions change, and by no means all the images which so captivated parents can be handed on in the relay race. Descendants are not receptive to all of them. The imprinted images of generations differ all the more strongly if the realities of childhood years have been greatly different. The experience which parents have accumulated may prove invalid under new conditions. The changes which have taken place in the digital age have engendered a so-called prefigurative culture, a situation where less that is useful can be passed from parents to children than would have been the case if change had been less abrupt.
Intergenerational fractures have occurred in the past, as an extensive literature testifies, but today's is unusually complete because of unprecedented changes related to a sharp rise in the productivity of labour and a change in the way leisure time is used. There is now more free time than working time. The emphasis has moved from work to consumption, from saving to spending, with a corresponding transformation of morals. This, indeed, is what has brought about the New Economy, and a marked re-coding of personality.
3.5.4.5 Imprinting and “Cheap Talk”
We have dwelt in detail on imprinting in order to explain why in certain instances “cheap talk” takes off, while in others it remains mere hot air. A certain percentage of people in society are receptive to the new. If cheap talk falls on fertile imprinting ground, these people will easily overcome the inertia of an easy life and comprise an initiative group capable of growth. Since much “talk” is going on at any given moment, some of it will hit the target, like a bullet from a random burst of machine-gun fire. People need to be “hit” in this way because, for a group economy to function, a certain rate of renewal of communities is essential. New codes are devised so that, when a club becomes too full, its leaders can use them to partition themselves off again. From numerous alternatives, the one adopted is the one which meets an unconscious demand rooted in new imprints. A process is initiated in the social cauldron with its law of large numbers which it is difficult to replicate under laboratory conditions. Such an interpretation of innovation is at variance with the popular viewpoint, which sees novelties as a single, calculated shot from a sniper. In reality, the birth of innovations is a two-stage process of emission and filtration. The first phase is the production of trial offerings, while the second is collective selection. Creativity is itself a two-stage process. Ideas enter a person’s head and he mentally filters them until he arrives at an insight. To start a “talk” and present it to society for judgement is the first part of the task, and that is the responsibility of the inventor, the creative individual. No less important, however, is the second part where one particular “talk” is taken up out of the many. This is the stage at which social imprints come into play.
A community develops its own system of filters, and until a happy combination of these is found a high-quality idea usually has to wait. Sometimes the moment is irrevocably lost. Just as a person with his bounded rationality makes do with an option which is satisfactory rather than ideal, so the community sometimes cannot wait for an ideal solution. An institutional trap appears: if some norm has become firmly established in practice, it is difficult for an innovation to displace it. If one scenario has been realised, it is difficult to move to another one, even if the latter is objectively better (Douglass North). In just the same way, people find it difficult to change their habits because these become encrusted with a host of other habits, practices and obligations. The textbook example is the QWERTY keyboard, which goes back to the days when mechanical typewriters were first coming into use. More ergonomic keyboards have been proposed (for example, the Colemak layout), but we continue to use the old layout because everybody has got used to it. Today's users prefer not to relearn, and manufacturers have to take account of this and block innovation for fear of weak demand.
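The lock-in just described can be made concrete with a toy simulation in the spirit of Brian Arthur's urn models of increasing returns (the model, its parameters and the two-standard setup are illustrative assumptions of mine, not taken from the text): each new adopter chooses between two competing standards with probability weighted by more than their current shares, so an early random lead feeds on itself.

```python
import random

def simulate_lockin(steps=10_000, gamma=2, seed=0):
    """Urn model with increasing returns: each newcomer adopts standard
    A or B with probability proportional to (current count)**gamma.
    With gamma > 1 an early random lead is self-reinforcing, so the
    market locks in to one standard -- the QWERTY effect."""
    rng = random.Random(seed)
    a, b = 1, 1                        # both standards start with one adopter
    for _ in range(steps):
        p_a = a**gamma / (a**gamma + b**gamma)
        if rng.random() < p_a:
            a += 1
        else:
            b += 1
    return a, b

a, b = simulate_lockin()
winner_share = max(a, b) / (a + b)     # close to 1: one standard dominates
```

With gamma = 1 the model is a plain Polya urn and the shares simply wander; with gamma > 1 the feedback is strong enough that whichever standard happens to pull ahead first ends up with almost the whole market, which is why a later, objectively better layout cannot dislodge the incumbent.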
It would be interesting to estimate the quantitative characteristics of imprinting and, in particular, to understand what is needed to get people to start the juggernaut of innovation moving. The sociology of fashion (and fashion is intimately linked to imprinting) enables us to estimate the number of groups which are potential carriers of a “virus”. Fashion innovators (those particularly susceptible to innovation) are approximately 2–3% of the market. After them come early adopters, 10–15%, then the fashion followers – up to 35%, and finally the laggards – 35%. The latter two groups, which together comprise around 70% of the market, are those who will take up a trend in the course of a year to 18 months. The remaining 13%, conservatives, do not join in the game. It is not impossible that 2–3% (or thereabouts) is a universal number which determines the size of a target group receptive to “cheap talk”, not only in fashion but also in other areas. Thereafter the innovation is capable of spreading under its own steam.
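The staged take-up sketched above, from a small innovating nucleus to the late majority, traces the familiar S-curve of diffusion. A standard way to model it (the Bass diffusion model, a textbook tool rather than the author's, with illustrative coefficients) treats adoption as driven partly by spontaneous innovation and partly by imitation of those who have already adopted:

```python
def bass_curve(p=0.03, q=0.38, steps=60):
    """Discrete Bass diffusion model. p is the innovation coefficient
    (roughly the 2-3% who adopt on their own initiative), q the
    imitation coefficient (the pull exerted by existing adopters).
    Returns the cumulative adopted share of the market after each step."""
    adopted = 0.0
    path = []
    for _ in range(steps):
        new = (p + q * adopted) * (1.0 - adopted)   # hazard * remaining market
        adopted += new
        path.append(adopted)
    return path

path = bass_curve()
# early periods: a trickle driven by p; middle: steep imitation-driven
# growth; end: saturation as the remaining market shrinks
```

The curve's shape matches the narrative: almost nothing happens while only innovators are on board, then the imitation term takes over and the bulk of the market follows within a compressed interval.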
If we suppose that the 2–3% mentioned are sufficient to launch a variety of processes, the question arises of why precisely this number of adepts is capable of acting as a catalyst for subsequent steady growth. We may suppose that if an innovation has caught the imagination of 2 or 3 people out of every 100, then this will become known in one way or another to their close circle. Because this averages several dozen individuals for each of them, through direct interpersonal contacts news of the innovation will reach the whole 100, and everybody who so wishes will be able to sign up. Needless to say, to create the critical mass it is essential to have that many people so that, through word of mouth, they are able to inform the rest of society. If what we have said is true, then a decisive role in the fate of “cheap talk” is played by a horizontal information cascade, that is direct communication between people. The vertical flow of information also exercises an influence. If powerful information channels like television are mobilised, then in order to launch a horizontal cascade fewer than 2% may be needed, and if it is not, more than 3% may be required.
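The back-of-envelope argument above, a 2–3% seed group whose acquaintances relay the news to everyone, can be checked with a toy simulation (the population size, contact count and seed count below are illustrative assumptions, not figures from the text):

```python
import random

def cascade_reach(n=100, seeds=3, contacts=30, seed=1):
    """Toy check of the horizontal-cascade argument: n people, each
    knowing `contacts` random others; news starts with `seeds` carriers
    and spreads along acquaintance links. Returns the fraction reached."""
    rng = random.Random(seed)
    neighbours = [rng.sample([j for j in range(n) if j != i], contacts)
                  for i in range(n)]
    informed = set(range(seeds))
    frontier = list(informed)
    while frontier:
        nxt = []
        for person in frontier:
            for other in neighbours[person]:
                if other not in informed:
                    informed.add(other)
                    nxt.append(other)
        frontier = nxt
    return len(informed) / n

reach = cascade_reach()
```

With several dozen contacts per person, three seeds out of a hundred are indeed enough for word of mouth to cover essentially the whole population, consistent with the estimate in the text; the interesting failures appear only when the contact network is made much sparser.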
3.5.4.6 Imprinting and Advertising
As noted above, economists explain branding in terms of “cheap talk”. Brand managers rely on imprinting, sometimes deliberately but more often intuitively. The credo of designer logos is to coincide with people’s latent expectations and transform the advertised good into a fetish. Spiderman, Bionicles from Lego, accessories for Barbie and other symbols of their time are directly inserted into people’s subconscious. To the same category belongs the marketing of food additives which are parasitic on the health cult, the preaching of pseudo-religious theories, and other more constructive phenomena. The stage is taken not by random goods and services, but by those which appeal to particular mindsets. Moreover, it is a matter not of the rational expectations of a particular quality, with which conventional marketing operates, but of slumbering, deep images. The interconnections are often far from obvious. For example, it is claimed by branding specialists that for North Americans personal independence is associated, amongst other things, with toilet paper. Accordingly, advertising for this associates it with other attributes of independence. Peanut butter is associated with maternal warmth; Swiss watches, Belgian chocolate, Russian kvass, Italian spaghetti, French cheese and Chanel’s little black dress are iconic goods different for each culture but deeply embedded in the system of associations.
Imprint favourites are associatively linked with a number of other goods and draw them in their wake, lighting them up with their aura. When market researchers question people as to what they would like their next car to look like, the answers by no means cover everything that will really influence them at the moment of choice. They talk out loud about the size of the luggage compartment and the position of the air intake, but actually picture themselves in a brutal jeep, heedlessly careering across the prairie, or dream of a retro limousine, James Bond’s car, or an armoured car with the hatch raised. Their choice is determined by a cocktail of fantasy and common sense and, other things being equal, the company which gets the fantasies right is the one which makes the sale.
So when “cheap talk” engages with imprinting, the ignition fires. If we count up the total costs needed to bring these things triumphantly together, we will see that ultimately cheap talk works out far from cheap. Sometimes immense efforts and many attempts are needed to hit the G-spot and bond the image with subconscious aspirations. The weaker the message and the less accurate its targeting, the more repetition is needed to lodge it in people’s heads. Success comes not only to dazzling advertising campaigns but to many others. It just costs the less talented ones a whole lot more.
3.5.5 An Illustration From Science of Phase Transformations in the Community
The scheme of social innovation proposed above needs further work and greater precision. In particular, the snowball analogy, on closer inspection, proves more delightful than accurate. The automatic further growth of communities after a critical mass of supporters has been reached is taken as self-evident, but real snowballs rolling downhill grow only to a certain size, after which they cease to grow. Why, then, do social snowballs keep growing to avalanche proportions? The analogy looks only at the raw competitive advantage of the first snowball, which deprives the others of snow; the actual growth process is not considered at all. What does the critical mass depend on? Is the number of people in the community important, or are there other factors?
In order to answer these questions, let us resort to an illustration from natural science, from solid-state physics. Such parallels are considered questionable, but nevertheless the restructuring of inanimate matter strikingly resembles the processes in society and helps us to make out what is going on. In some respects atoms do behave like living beings: they are not indifferent to their environment, they can change their neighbours, separate into groups with a different mutual disposition, and block off enclaves with barriers or borders. In a word, they can do the same things as people in a community. As a result of such restructuring, caused by a change in external conditions, a solid body alters its interior structure and attributes. One of the main mechanisms of such transformation is the engendering and growth of new embryos. That is, just as in society, within an inanimate system there arise hearths of growth with different attributes which gradually expand to restructure and reconfigure the initial matrix in their own image.
Physicists have given this process an explanation which may be of interest for the humanities. The first experimentally established element is the need to acquire a critical mass. Under particular conditions all the atoms of a substance are drawn to assume a new structure, but the only communities to succeed are those which have attained a certain size. Having done so, they begin to grow steadily. Those which have not reached the necessary mass are re-absorbed by the initial matrix. The growth of viable conglomerates continues until they encounter other conglomerates similar to themselves, engendered in a different hearth and developing in parallel. At that moment the process of transformation ends, since there is nowhere further to grow. All the atoms have been allocated to communities.
Why do new embryos arise? The explanation is a gain of energy in the new configuration (the atoms settle at a spacing which is optimal for the new conditions). Putting it very simply, in the new arrangement the atoms feel better than they did in the old one; extrapolated to the community, a better group environment arises. Why, then, are only the sufficiently large conglomerates inclined to grow, when it would seem equally advantageous for all to restructure? The obstacle proves to be the new surface (in our lexicon, the boundary of the group). At the boundary the new formation comes into contact with the initial mass and a tension arises there: always and everywhere, the abutting of the new and the old produces a problem zone. If we are talking of people, they find themselves, as it were, falling between two stools; if of particles of matter, then the spacing in the border regions is not optimal, and extra energy has to be expended to keep them in position. A small growth hearth may have insufficient resources to build its own boundary, which is essential for separation. The volume of a new, harmoniously constructed embryo grows as the cube of its linear size, while its surface grows only as the square; but at small sizes the quadratic term outweighs the cubic, and so the forces of surface tension hold back initial growth.
In other words, the larger the new formation, the fewer the number of its participants fated to live in the discomfort of the new frontier strip. Growth enters a steady phase when newbies immediately land in a favourable environment in such numbers that the heterogeneous neighbours have no impact. Until social density is sufficient, the ties of the past pull everything backwards.
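The trade-off between a quadratic surface cost and a cubic volume gain is classical nucleation theory, and the critical mass can be made explicit in a few lines (the numbers are arbitrary illustrative units, not physical data from the text):

```python
import math

def embryo_energy(r, sigma=1.0, dg=0.5):
    """Energy change of a spherical embryo of radius r: a surface-tension
    cost growing as r**2 minus a bulk energy gain growing as r**3."""
    return 4.0 * math.pi * r**2 * sigma - (4.0 / 3.0) * math.pi * r**3 * dg

sigma, dg = 1.0, 0.5
r_star = 2.0 * sigma / dg     # analytic critical radius (4.0 with these units)
# below r_star, growing raises the energy and the embryo is re-absorbed;
# above r_star, every further step of growth releases energy
```

Translated back into social terms: sigma is the cost of manning the group's frontier strip, dg the per-member gain from the better environment inside, and r_star the critical size below which the ties of the past pull everything backwards.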
We have deliberately anthropomorphised a natural process in order to emphasise similarities. Let us return to the social coordinates. The analogy of the energy advantage which moves particles is, in respect of new social group formations, improved quality of communications. People join a group because they see potential quality time for themselves within it. The group grows unobstructed from the moment new recruits arrive in a more or less comfortable environment. The main thing the group economy should learn from our analogy is the role of boundaries. Creating these requires an upfront investment. Historians say, “The rich get richer because they are rich”. Correspondingly, the poor lock themselves inside a vicious circle of poverty. The analogy from physics allows us to develop this idea by pointing to the importance of barriers to growth.
The three mechanisms we have examined bring us to an understanding of how all sorts of doctrines, ideologies, beliefs, moral imperatives and the like crystallise out in the human mind and determine the behaviour of the masses and the course of history. “Cheap talk” gives rise to the “snowball”, which in turn brings about the formation of “superstars”, and the latter conquer hearts and minds which, for their part, are ready and waiting to be conquered. Social imprinting completes the chain and helps us to understand why one “talk” gets taken up while another does not.
3.6 Collaborative Filtering and the Prospects for Democracy
Collaborative filtering can also have political applications. In economics, the theory of social choice (propounded by James Buchanan and Gordon Tullock) analyses political institutions. Attempting to expound it here would serve no purpose, but it will be helpful to outline how collaborative technologies can be applied in this sphere.
The basic idea is that universal, unhampered and unmediated socialising of each with each and of each with all brings about major changes in the nature of the electoral environment: its coherence, informational conductivity, activity and reactivity will be increased many times over. If people are given the tools to unite in meaningful coalitions, the quality of the electoral process is likely to improve markedly. Rules and procedures for forming institutions of government are firmly tied to communication facilities and the sophistication of the electorate, and accordingly there will be a powerful impulse towards transforming these institutions. It is worth considering how the political (electoral) toolkit would be renewed, and possibly beginning to redesign it in advance, even trialling it.
If the technology of filtering had not been invented for resolving problems of consumer navigation in culture, it would need to be invented for developing democracy. We can detect the beginnings of collaborative filtering in the current procedures for elections: the electorate is grouped along territorial and party lines, although admittedly both are imperfect ways of aggregating interests. For example, living in the same region may imply a certain community of interests, but this is by no means total. Intellectuals in Kostroma and Vyatka may be far closer to each other than to their neighbours from other social strata. As regards the division into parties, it faces many people with a dilemma: whether to rally under the banner of the dissidents or abstain from voting. No small percentage of people chooses the latter of these two evils, finding no candidate they respect, or recognising that the one they do favour will not gain the necessary number of votes. In this way, their interests, and also the interests of others who voted for candidates who did not pass the prescribed threshold, are discriminated against. Possibly, if they lived in a region with a larger number of like-minded people, they would enjoy political representation. The party mechanism is also unsatisfactory because people do not know or understand the manifestoes, and the party leaders for their part do not know their supporters. Manifestoes are often a potpourri of “cheap talk”, and moreover not in an academic but in the everyday sense of the expression.
For these and other reasons democracy works less well than it might. Fewer political groups participate in elections than the process needs to work properly, and those which win prove unstable in their composition, with the result that their work is less satisfactory. A political group is a means of aggregating the will of individuals and delegating it upwards for implementation. If the interests within the group are chaotic, they will cancel each other out and, instead of strength, what will be projected to the outside world is impotence (Mancur Olson). We can bring the ideal of democracy closer if we find a method of delivering political representation of many interest groups, each with its own clearly delineated interests. Then any citizen would have his own empowered representatives in the areas which matter to him, whether professional, educational, leisure, or other. Despite the utopian nature of the idea, it is gradually maturing in people's minds. The point is that a person, being a multifaceted individual, could divide his vote between the leaders of several electoral associations he favoured, each of which would specialise in particular interests: ethnic, business, cultural or sport, as long as it could assemble a sufficient number of supporters. In order to participate in these coalitions, a person would need to subscribe a quota of personal resources to each of them, thereby registering the degree of his interest, and naturally he would have to decide which organisations he needed to, and could afford to, belong to. The number of interest groups could not proliferate infinitely: the process of fragmentation of society into parties is self-regulating. Moreover, the real (solvent) demand for democracy would be revealed, and the extent to which society can afford it would become visible. As a result, a political system would emerge which consisted of many blocs crisscrossing the electorate, each representing different aspects of their voters' interests.
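The vote-splitting mechanism described above can be sketched in a few lines (the coalition names, the one-unit quota per citizen and the viability threshold are all hypothetical choices of mine, not specified in the text):

```python
def viable_coalitions(allocations, threshold=1.5):
    """Hypothetical vote-splitting scheme: each citizen divides one unit
    of voice among the coalitions he cares about; only coalitions whose
    pooled support clears `threshold` gain representation."""
    totals = {}
    for citizen in allocations:
        for coalition, share in citizen.items():
            totals[coalition] = totals.get(coalition, 0.0) + share
    return sorted(c for c, t in totals.items() if t >= threshold)

citizens = [
    {"ecology": 0.5, "small business": 0.5},
    {"ecology": 0.2, "sport": 0.8},
    {"small business": 1.0},
]
winners = viable_coalitions(citizens)   # ["small business"]
```

The threshold plays the self-regulating role the text describes: coalitions that cannot assemble enough pooled voice simply fail to form, so the number of interest groups cannot proliferate without limit.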
This scheme describes an extreme but imaginable development of democracy. The idyll is unlikely ever to be realised, not least because of the QWERTY effect, but it is nevertheless profoundly in harmony with the new group economy. Even though actual moves in this direction may be a long time coming, collaborative filtering (and we cannot do without it) will gradually begin to be enlisted in political practice. Since this is a technology for forming any kind of group, it is in principle applicable also to political associations. One can foresee the following potential effects: (1) voters would group more meaningfully, since groups would be formed in full view of them; (2) a mutual adaptation of candidates to voters and vice versa would be observed.
The creation of political groups is an unusual task to set collaborative filtering, and in order to perform it the technology will need to be modified. However, the underlying principle of establishing the proximity of people from the similarity of their experience and preferences is unchanged. In all the spheres of application for collaborative filtering we have looked at up to now, we were dealing with “experience goods”: goods which are badly categorised and which it is simpler and cheaper to sample than to try to work out in advance what they amount to. A major consideration when choosing them is taste. We might think political figures have a lot in common with experience goods, but they do not lend themselves to collaborative filtering. Candidates are hardly the equivalent of works of art, although the sympathies evoked by the screen and poster images in which they are presented to voters are almost decisive in terms of electoral results. Neither are cross-recommendations a viable option, whereby the preferences of taste neighbours in other fields are transferred to politics. At least for the time being it is not clear which similarities could be used to ensure an appropriate transfer, although that is not to say we may not discover such parallels in the future. To cap it all, if an election is being held there is no time for candidates to be “consumed” and rated by others so that their experience can be exploited, as is standard practice in collaborative filtering. Electing a governor or a president is not the same as choosing a book, if only because the act of electing is a one-off and the next attempt is far removed in time.
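The standard practice referred to here, inferring proximity from overlapping ratings of experience goods and then borrowing a taste neighbour’s experience, can be sketched in a few lines. All users, books and ratings below are invented purely for illustration.

```python
from math import sqrt

# Invented ratings of "experience goods" (books) on a 1-5 scale.
# Users need not have rated the same items for proximity to be computed.
ratings = {
    "ann":   {"book_a": 5, "book_b": 4, "book_c": 1},
    "boris": {"book_a": 5, "book_b": 5, "book_d": 4},
    "clara": {"book_a": 1, "book_c": 5, "book_d": 2},
}

def cosine(u, v):
    """Cosine similarity over the items both users have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def nearest_neighbour(user):
    """The taste neighbour whose past ratings most resemble the user's."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    return max(others)[1]

def recommend(user):
    """Borrow the neighbour's experience: his best-rated unseen item."""
    nb = nearest_neighbour(user)
    unseen = {i: r for i, r in ratings[nb].items()
              if i not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None
```

The very feature that makes this work for books, a long trail of comparable past consumption, is exactly what one-off elections lack.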
Another option might be to stitch together groups of politically like-minded people on the basis of their ratings of political personalities, but this will not work either, because the experience of “consuming” regional leaders and presidents is one-off and there is little carry-over of relevant knowledge. If all electors in their lifetime saw at least a couple of dozen political personalities in the same political role, it might be possible to compile a profile from their ratings which would enable everything to proceed in accordance with the norms of collaborative filtering, but that is not the way things are. It is tempting to try to create a political profile of citizens from their attitude towards particular political events, but that would work only for electors who took a very close interest in political life.
It is, nevertheless, possible to use a modified version of the collaborative method in the political process. The technology is part and parcel of the group economy and can be used to form groups of any description, provided we choose the right filters. The adaptation politics requires is to filter not preferences (although these can and should be taken into account) but the criteria people believe should be used to rate candidates, computing the proximity of those criteria. The collaborative system’s most basic ability is calculating proximity, and proximity can be computed on the basis of any assortment of characteristics, including political views.
People could be grouped on the basis of what they want from politicians. Their expectations can be formulated in ordinary, readily comprehensible language, without complicated terminology and beginning with such basic coordinates as “left-right”, “socialism-capitalism”, “liberalism-statism”, “hawk-dove”. Suppose a list has been made of what is desired or required of candidates. The voter puts this in order of his personal priorities. It is fine if he chooses to ignore certain attributes, in just the same way that people do not have to have read exactly the same books in order for the system to compute proximity in terms of literature. The list could, for example, include such personal characteristics of the future leader as sex, age, charisma, pragmatism, managerial and political experience. It might include his proposals on financial and economic, social, industrial or fiscal policy. The characteristics could initially be provided by experts, but with time they would be augmented by points identified by voters themselves. This is precisely how tag navigation on the internet is constructed: it consists of short marker phrases which users attach to particular materials. A user’s political profile would be compiled from the priorities he himself had indicated, and then fed in for standard processing by the collaborative algorithm. When a computer programme is calculating the proximity of series of numbers it is entirely indifferent to what the numbers stand for: ratings of art works or political demands. In one case the output will be reference circles based on aesthetic taste, and in the other similar circles of politically like-minded people.
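A minimal sketch of this kind of proximity calculation, assuming only that each voter lists the criteria he cares about in order of personal priority and simply omits the rest. The voters and criteria below are invented for illustration; proximity here is a rank-agreement score (a Spearman-footrule variant) computed over shared criteria only.

```python
# Invented example: each voter orders the criteria he cares about by
# priority; criteria he ignores are simply absent from his list.
rankings = {
    "voter_1": ["fiscal", "hawk_dove", "experience", "charisma"],
    "voter_2": ["fiscal", "experience", "hawk_dove"],
    "voter_3": ["charisma", "age", "fiscal"],
}

def proximity(a, b):
    """1 minus the normalised average rank difference over the criteria
    both voters listed; 1.0 means identical priorities on shared ground."""
    pos_a = {c: i for i, c in enumerate(rankings[a])}
    pos_b = {c: i for i, c in enumerate(rankings[b])}
    shared = set(pos_a) & set(pos_b)
    if not shared:
        return 0.0
    max_rank = max(len(pos_a), len(pos_b)) - 1
    if max_rank == 0:            # both listed a single, shared criterion
        return 1.0
    diff = sum(abs(pos_a[c] - pos_b[c]) for c in shared)
    return 1.0 - diff / (len(shared) * max_rank)

def reference_circle(voter, threshold=0.5):
    """The circle of politically like-minded people for one voter."""
    return sorted(o for o in rankings
                  if o != voter and proximity(voter, o) >= threshold)
```

As the text notes, the algorithm is indifferent to what the numbers encode: the same routine that groups readers by literary taste here groups voters by what they demand of a candidate.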
This modification of collaborative filtering is already being used to select consumer goods and services, particularly electronics and hotels. Take as an example the choice of a professional camera. The major considerations here are not only taste preferences, but also pragmatic aspects like price, the purposes for which and the conditions under which photographs are expected to be taken. The best advisers will be purchasers who were guided by the same considerations. It is, of course, also extremely helpful to discover what kind of considerations these were. It is from these people that the reference group should be drawn which will advise the user, and with which he will have the opportunity of socialising.
This variety of collaborative filtering is also suitable for searching for professional literature, a task which to this day is handled remarkably badly. When selecting proximate advisers, priority should be given not to past ratings (although that information is not irrelevant) but to the criteria the potential taste neighbour currently employs in selection. Publications may be cutting-edge or popular, may attract readers by the style of their exposition or the interest of their presentation, and may be relevant or not depending on the specific enquiry. The most valuable asset is the experience of people guided by the same considerations.
Using collaborative technologies in politics may have a number of useful spin-offs. For example, they can be used to establish how far a manifesto corresponds to a group’s aspirations, simply by measuring the proximity of the series of numbers which code their priorities. This will make it easier for voters to see who they are voting for and why. It will also indicate the gap between the wishes of an individual and the pre-election platform of the activists of a group he is considering joining. There is a temptation to go further and calculate the optimal manifesto, the one which best balances the aspirations of all the groups, but this is likely to prove fruitless: Kenneth Arrow has shown that in a democracy it is impossible to calculate an optimum of social well-being which would equally satisfy everybody.
What holds out the greatest promise, however, is the fact that in politics as nowhere else it is important to be informed and to have access to informed commentary, key elements of the New Economy. Group filtering throws light on much of what is emerging in the depths of society. The willingness of a person to join a particular group or party is directly affected by knowing how other people are reacting to information, and knowing their intentions. This is precisely the way we weigh up the prospects of a particular association. Election results depend not only on what people have heard about candidates and been told in their statements. No less important is whether they know who other people are going to vote for, and what considerations they are guided by. An admirably conceived group will not become popular if it does not show the requisite pace of growth. Good dynamic characteristics are in themselves a catalyst for voters.
Publicly available information about potential supporters of particular initiatives is decisive in determining the viability of civil and political associations. Before attempting to implement anything or joining any group, it is advisable to weigh up the likelihood of its attracting a sufficient number of supporters; without that, all efforts will be wasted. An initiative may fail to take off not because nobody is interested but purely because potential supporters are too dispersed. No businesslike initiative can do without the necessary social backing. The autocrats of old Russia were well aware of this secret and throughout Russian history kept socially active groups divided: when the favourites of the tsars were rewarded with lands, their patrimonies were scattered over an enormous area to prevent their owners acquiring a local power base.
For initiatives of this kind to be viable, would-be participants need three kinds of information:
– a comprehensible explanation of every proposal, including the cost of supporting it, so that each individual can choose for himself what is most important and direct his efforts intelligently;
– the minimum essential, or optimal, number of supporters required to make the project feasible and able to start;
– online information about the rate at which people are expressing a provisional intention of joining, and the “cost” of participating given the numbers currently listed.
It is critically important to state these conditions plainly and to make clear the moment at which a membership application assumes the form of a binding contract. Collaborative filtering is ideally suited to coping with these tasks, especially where it is desirable to know not only how many have signed up but also to have a deeper knowledge of who they are. If this kind of practice were adopted it would complicate the game of politics, but there is no simpler way apparent for balancing interests against each other without crushing any of the groups.
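The threshold mechanism described above, in which provisional pledges become binding only once enough supporters have signed up and the cost per participant falls as the list grows, can be sketched as follows. The class, figures and names are invented for illustration.

```python
# Illustrative sketch of a threshold-pledge ("assurance contract")
# mechanism: pledges stay provisional until the minimum number of
# supporters is reached, at which point they become binding.
class Initiative:
    def __init__(self, total_cost, min_supporters):
        self.total_cost = total_cost
        self.min_supporters = min_supporters
        self.supporters = []

    def pledge(self, name):
        """Register a provisional intention of joining."""
        self.supporters.append(name)

    def cost_per_supporter(self):
        """The 'cost' of participating given the numbers currently listed."""
        if not self.supporters:
            return None
        return self.total_cost / len(self.supporters)

    def is_binding(self):
        """Pledges turn into a contract once the threshold is crossed."""
        return len(self.supporters) >= self.min_supporters


# Usage: a project costing 9000 that needs at least three backers.
park = Initiative(total_cost=9000, min_supporters=3)
park.pledge("resident_1")          # cost per head: 9000, not yet binding
park.pledge("resident_2")
park.pledge("resident_3")          # threshold crossed: 3000 each, binding
```

Publishing exactly these three numbers, total cost, threshold and current per-head cost, gives each potential supporter the information the list above calls for before he commits.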
Two further important functions for the collaborative space are to serve as a round-the-clock feedback channel for voters’ opinions and as a mouthpiece for that feedback. A dynamic rating and reputation can thus be derived for the person eventually elected, based on citizens’ online ratings of how far actual policy corresponds to pre-electoral promises. (This is clearly a threat for politicians.) It is also plain that collaborative filtering is a two-edged sword: knowing the views of the public may make it easier to pander to them. The strength of the institution, however, is that (a) leaders will have to compete openly; and (b) they will not be able to control a multi-million-strong social arbiter.
The measurement of subjective time by means of second money affords an opportunity to calculate the results of symbolic exchange, and in particular its impact on culture. If methods germane to culture appear for measuring symbolic value, natural indicators and regulators of processes will arise which will allow symbolic markets to escape from their domination by commerce. Money for culture should not be generated artificially outside culture and presented to it on a plate; it should emerge only from the evaluation of subjective time through joint efforts. At first this will be done only through ratings, but gradually these will be replaced by voluntary post-factum payments, or second money.
Open Access. This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.