1 Toward rule selection and Turing’s idea

When the number of states is large, it may be challenging to formulate an appropriate rule. Without a computable rule, it seems difficult to steer the evolution of the system toward the desired purpose. Therefore, in this paper, we study the application of simulations, but in a smarter manner than what currently exists (Fig. 1).

1.1 Wolfram interactive cellular automaton

To examine rule selection, we refer to the experiments on elementary cellular automata. In particular, the publication of "A New Kind of Science" (Wolfram 2002) showed that the automaton of Wolfram's rule 110 fulfills the criteria of Turing completeness. This is among the major problems that interested Wolfram. Rule 110 and similar rules are still being explored (Table 1).Footnote 1

Table 1 Set of rules of the rule 110 automaton
Fig. 1 Binary-colored presentation of rule 110 (black squares: 1; white squares: 0)
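To make the construction concrete, the following is a minimal Python sketch (not part of the original study) that evolves an elementary cellular automaton under rule 110 from a single black cell and prints a picture of the kind shown in Fig. 1; the periodic boundary condition and the grid size are our own assumptions.

```python
# Minimal sketch of an elementary cellular automaton, run here with rule 110.
import numpy as np

def run_eca(rule_number: int, width: int = 101, steps: int = 50) -> np.ndarray:
    """Evolve an elementary CA; row t+1 is computed from row t under the given rule."""
    # Bit i of the rule number gives the new state for neighborhood value i (0..7).
    rule_bits = [(rule_number >> i) & 1 for i in range(8)]
    grid = np.zeros((steps, width), dtype=int)
    grid[0, width // 2] = 1                                       # single black cell at the top
    for t in range(steps - 1):
        left, right = np.roll(grid[t], 1), np.roll(grid[t], -1)   # periodic boundary (assumption)
        neighborhood = 4 * left + 2 * grid[t] + right             # 3-bit code per cell
        grid[t + 1] = [rule_bits[v] for v in neighborhood]
    return grid

if __name__ == "__main__":
    for row in run_eca(110, width=63, steps=30):
        print("".join("#" if cell else "." for cell in row))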

The cellular automaton has a simple structure, but it is capable of generating complicated behavior of the kind found in Class 4. The figures for Classes 1–4 are reproduced in Fig. 2.

Cellular automata (CA) can be classified according to the complexity and information produced by the behavior of the CA pattern:

  • Class 1: Fixed; all cells converge to a constant black or white set.

  • Class 2: Periodic; repeats the same pattern, like a loop.

  • Class 3: Chaotic; pseudo-random.

  • Class 4: Complex local structures; exhibits behaviors of both Class 2 and Class 3; likely to support universal computation (Carvalho 2011).

Fig. 2 Classes 1–4 (cited from Wolfram 2002: http://www.wolframscience.com/nks/p231-four-classes-of-behavior/)

By resorting to the Fully Random, Five-Rule Interactive Cellular Automata (ICA) of Mitchell and Beyon (2011), we can easily examine the effects of heterogeneous interactions of rules. We employ a reduced version, the Three-Rule ICA, to examine the interactions of three different rules and to compare more easily the effects of heterogeneous interactions of rules around rule 110. When only the rule dynamics are considered, we can analyze the effects of the rules on the overall dynamics (Fig. 3).
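The following is a hedged Python sketch of the fully random rule-selection idea behind a Three-Rule ICA; it assumes, as a simplification, that every cell independently draws one rule from the rule set at every step, and it is not a reimplementation of Mitchell and Beyon's (2011) simulator.

```python
# Sketch of a Three-Rule interactive cellular automaton (ICA): at every step each
# cell independently draws one rule from a small rule set and applies it locally.
import numpy as np

def ica_step(row, rule_set, rng):
    """One step: every cell picks its own rule at random and applies it."""
    left, right = np.roll(row, 1), np.roll(row, -1)
    neighborhood = 4 * left + 2 * row + right
    chosen = rng.choice(rule_set, size=row.size)          # one rule number per cell
    return np.array([(int(r) >> int(v)) & 1 for r, v in zip(chosen, neighborhood)])

def run_ica(rule_set, width=101, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.zeros((steps, width), dtype=int)
    grid[0, width // 2] = 1
    for t in range(steps - 1):
        grid[t + 1] = ica_step(grid[t], rule_set, rng)
    return grid

if __name__ == "__main__":
    pattern = run_ica([110, 90, 30])       # e.g. rule 110 mixed with two other rules
    print(pattern.sum(axis=1))             # rough summary: number of black cells per step
```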

Fig. 3 Three-Rule ICA (figures produced with Mitchell and Beyon's (2011) simulator)

One possible application of FRICAs is a more refined classification system based on, for instance, how damaging the inclusion of a given rule is to the universal behavior of rule 110. It is also possible that systems in nature mimic the process of choosing randomly for each operation from a limited set of functional rules.

Using the Five-Rule ICA, we can determine the effects as the number of copies of rule 110 is increased gradually to five. The initial distribution of the five rules is set as {Rule 23, Rule 183, Rule 18, Rule 238, Rule 12}. The first component of the initial distribution is replaced with Rule 110 to give {Rule 110, Rule 183, Rule 18, Rule 238, Rule 12}. By applying the same procedure to the last result, the second component is also replaced with Rule 110 to yield {Rule 110, Rule 110, Rule 18, Rule 238, Rule 12}. By repeating this procedure, we finally obtain a set in which all components are Rule 110. We can thus observe the different interactions of heterogeneous rules as the number of copies of Rule 110 varies (Fig. 4).
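As a small illustration, the replacement procedure just described can be written down as follows; this is only a sketch, and the staged rule sets could then be fed to a simulator such as the run_ica sketch above.

```python
# Sketch of the stepwise substitution of rule 110 into the five-rule distribution.
initial = [23, 183, 18, 238, 12]
stages = []
current = list(initial)
for i in range(len(current) + 1):
    stages.append(list(current))      # record the current rule set
    if i < len(current):
        current[i] = 110              # replace the next component with rule 110

# stages[0] is the original set and stages[5] is [110, 110, 110, 110, 110];
# each stage can be passed to run_ica(stage) from the sketch above.
```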

Fig. 4 Five-Rule ICA (figures produced with Mitchell and Beyon's (2011) simulator)

2 Market mechanism with redundancies and a deeper logic of complexity

As Mainzer (2007), a philosopher of science, recommended, the idea of creative coincidence in human history can be applied to technological innovation. The application of creative coincidence to innovation then suggests a new idea that replaces J. Schumpeter's creative destruction; see Aruka (2009) for more details.

The momentum of creative coincidence can be revealed by examining logical depth. Mainzer originally studied the Turing machine. Logical depth may be defined in the following manner. With the algorithmic probability \(P_s\) that a randomly generated program outputs the sequence s, we have a measure of the logical depth of s at our disposal. A sequence s has logical depth when the largest proportion of \(P_s\) is contributed by short programs that require a large number of computational steps to produce s. DNA sequences that have evolved over millions of years with many redundancies and contingencies survive by encoding compact programs that require an enormous number of computational steps for the development of the entire description of a complex organism. In this sense, they have great logical depth, the depth of information generated by a long and complex evolution.
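For reference, one standard formalization of these notions, due to Bennett and consistent with the verbal definition above, can be written as follows; the machine U, the runtime function, and the constant c are notational assumptions of this sketch rather than Mainzer's own symbols.

```latex
% Algorithmic probability of a sequence s on a universal prefix machine U:
P_s \;=\; \sum_{p \,:\, U(p) = s} 2^{-|p|}

% s is logically deep when most of P_s comes from short programs with long
% running times; one way to make this precise (Bennett's logical depth) is
\mathrm{depth}_c(s) \;=\; \min\bigl\{\, \mathrm{time}_U(p) \;:\; U(p) = s,\ |p| \le K(s) + c \,\bigr\}

% where K(s) is the length of the shortest program that outputs s and
% c \ge 0 is a significance parameter.
```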

From an engineering viewpoint, a complex system with a deeper logic may be interpreted as a program that involves various elaborations at each implemented stage. A proper degree of redundancyFootnote 2 is rather indispensable for generating innovation. In this sense, a series of small, new inventions will lead to a great innovation. In other words, a deeper logic in engineering may imply plentiful, high-precision, and high-accuracy elaborations. These elaborations are in essence irrelevant to either a pecuniary motive or a market mechanism. It is evident that such an idea, connected with a new invention, is often rooted in our traditional techno-culture. This point of view is much encouraged by Brian Arthur, as we describe in the next subsection.

2.1 Market mechanism in evolution

We follow the essential idea of Arthur (2009), who believes that technology is a superclass of economy.Footnote 3 In this context, we can say that technology creates itself out of itself.Footnote 4 His idea also applies to the market mechanism.

As Arthur (2009) also illustrated, the financial market did not itself prepare a new system for a safe options market. Rather, the so-called renovation of the financial business became feasible because computers evolved to solve the complicated risk calculations that were needed for options transactions (Arthur 2009, 154). The financial market is now exposed to high-frequency trading (HFT). However, HFT is also a product of the evolution of computability. Market theory is utterly irrelevant to the evolution of the computer; it is the evolution of the computer that has realized HFT. Computers evolved to handle the complicated transactions required by HFT, whether in stock exchanges or in currency exchanges. It must be noted that HFT is changing the institutional setting of transactions. The prestige of a seat at the exchange is diminishing, because high-spec servers are endowed with virtually the same membership as a seat at the stock exchange. More generally, technological innovations can change the qualities of transactions. It has been remarked that market theory does not specify how the actual market system is constructed. In this sense, the market described by existing market theory has never been shown to actually exist.

Consider the example of sake brewing: there are two ways of brewing, the batch process and the continuous process. The quality and taste of the sake differ depending on which brewing process is used. Similarly, it is well known in market auctions that there are two ways of matching: the batch auction and the continuous double auction. The different auction methods apparently bring different results. In the reality of the exchange, even a market equilibrium does not necessarily hold. In the older stock exchanges, prior to computer processing, the closing time was often extended to allow the markets to settle down. In fact, it takes much time to arrive at a settlement price. The continuous double auction is generally regarded as an on-the-spot decision procedure that finds a match as soon as possible. The stock exchange is always monitoring the time series of ongoing transactions and seeks to find matches by suggesting a current price band within which current orders could be successfully settled. This is a kind of market engineering.
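To make the contrast concrete, here is a minimal Python sketch of the two matching methods; order quantities, tick sizes, and the actual tie-breaking and price-band rules used by a real exchange (or by the U-Mart kit) are deliberately omitted, and all numbers are illustrative.

```python
# (i) a batch (call) auction that clears crossing orders at one price, and
# (ii) a continuous double auction that matches each incoming order immediately.
import heapq

def batch_clearing(bids, asks):
    """Pair the highest bids with the lowest asks; return (clearing price, volume)."""
    bids, asks = sorted(bids, reverse=True), sorted(asks)
    volume, price = 0, None
    for b, a in zip(bids, asks):
        if b < a:
            break
        volume += 1
        price = (b + a) / 2                              # midpoint of the last crossing pair
    return price, volume

def continuous_double_auction(order_stream):
    """Each order is ('buy'|'sell', price); match against the best resting order."""
    resting_bids, resting_asks, trades = [], [], []
    for side, price in order_stream:
        if side == "buy":
            if resting_asks and resting_asks[0] <= price:
                trades.append(heapq.heappop(resting_asks))   # trade at the resting ask
            else:
                heapq.heappush(resting_bids, -price)         # max-heap via negation
        else:
            if resting_bids and -resting_bids[0] >= price:
                trades.append(-heapq.heappop(resting_bids))  # trade at the resting bid
            else:
                heapq.heappush(resting_asks, price)
    return trades

if __name__ == "__main__":
    print(batch_clearing([101, 99, 104], [103, 100]))                    # -> (102.0, 1)
    print(continuous_double_auction([("buy", 101), ("sell", 103),
                                     ("sell", 100), ("buy", 99), ("buy", 104)]))
```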

The mathematical reasoning about the market process skips over the engineering part of the process. Even given a particular bundle of binding devices, which the stock exchange has cultivated for many years, the dealers cannot always arrive at an equilibrium state by themselves. No guarantee is secured simply by assuming a black box for the market. The architecture of a market must be described each time a particular market is discussed (see Aruka 2017). The market architecture is indispensable for establishing equilibrium. A mathematical statement of the existence of an equilibrium tells us nothing about the reality of the market mechanism. Mizuno and Watanabe (2010) and Mizuno et al. (2010), who are Japanese econophysicists, have already verified that the results generated by the online market system called "KAKAKU dot Com" (http://kakaku.com) often do not satisfy the conditions of perfect competition that market theory recommends. It should be noted that the online market is designed precisely to fulfill the conditions of perfect competition. Contrary to traditional markets, in the online market some firms that adopt a price greater than the lowest price are never driven away from the market over the course of a year.

2.2 A short history of Japanese commercial engineering

In Japan, the shift to the capitalistic mode of production had certainly been prepared by a series of matured circumstances in various spheres by at least the 17th century, when international financial institutions and networks were formed. The underlying industrial capacities, financial capacities, and commercial networks were all sufficient to provide a full-fledged launching point for capitalism. The most typical example of economic and commercial organization in Japan was the Osaka-Dojima Rice-Stamp Exchange. These factors cannot be self-organized by market forces.Footnote 5 Often, they were outcomes of creative coincidences connected with historically ingenious persons. In the case of the Osaka-Dojima system, that figure was YODOYA.Footnote 6

With brilliant entrepreneurship, the YODOYA family developed both the direct transaction of rice and the indirect transaction by way of the bill exchange of rice stamps during the 17th century in Japan. This device was fully implemented in the institutions that supported modern financial speculation. In particular, the bill exchange system brought YODOYA great wealth. In 1730, the indirect exchange was officially recognized by the Shogun government of Japan.Footnote 7

This year marked the world's first establishment of a modern futures market system. The architecture of the futures market was in fact prepared by the Japanese. This may be regarded as a technological innovation.

The U-Mart system is an artificial-intelligence futures transaction system with a long lifetime that was initiated by Japanese computer scientists in 1998 (see Aruka 2015, pp. 111–112; Shiozawa et al. 2008). This system is compatible with both types of matching, batch and continuous double auctions. Moreover, either human agents or algorithmic agents can join the system. Two eminent properties were built into the U-Mart system from the beginning. One is the participation system of hybrid agents; after the U-Mart system was released, reality moved closer to the U-Mart. The other is the implementation of the acceleration experiment tool,Footnote 8 which anticipated the dominance of HFT. In this section, we use the acceleration experiment tool. In our context, it should be specially noted that our system, originally designed as a virtual system, has in the event turned into an actually realized system: the "Equity Index Futures" at the Osaka Exchange, which is a branch of JPX.Footnote 9

The development of the U-Mart system was mainly engineer-driven,Footnote 10 and it is now internationally recognized as a good platform for AI markets. The source code of the project is open to the public.Footnote 11

3 Examining the futures market by the U-Mart simulation in the default agent configurations

One of the most interesting features of market transactions is that a zero-intelligence agent is frequently the dominant winner of the market game. It has also been easily verified in the U-Mart system that the random agentFootnote 12 is often the winner. This is a reason to doubt the traditional idea that rational/intelligent behavior optimizes the performance of the market. It is also interesting to notice that equilibrium cannot be established without dropping the assumption of homogeneous agents: if all agents select the same behavior, all selling or all buying, there may not be any settlement. Therefore, it matters what types of agents are implemented. These considerations motivate us to conduct a realistic simulation using the U-Mart system.

As argued in the last section, an evolving system in which the participating agents are heterogeneous and mutually interacting may be a system with many redundancies. In our realistic simulation, the market is an evolving system in which the initial conditions bring about similar results, even though the various heterogeneous agents interact either intelligently or randomly. In such a complex system, as already discussed, the ICA deals with the problem of "how damaging the inclusion of a given rule is to the universal behavior". Thus, the ICA repeats a similar procedure to obtain a certain effect based on the universal rule by referring to the different interacting rules.

Now, we incorporate Class 4, as defined in the first section, into the market system. In the market, at first, various types of participants are locally formed and then mutually interact in complex and interesting ways. They form local structures that are able to survive for long periods of time. In the first section, the Wolfram ICA simulations examined attractor formation. Conversely, in the market experiment, we will detect any sensitivity generated in the relationship between an initial strategy configuration (ISC) and its final performance configuration. A final performance is represented by some specific shape. We will examine whether a final performance configuration is sensitive to its initial strategy configuration or not. Then, the shape retention of the performance configuration across the initial strategy configurations and across the experimental modes is addressed.

Here, we restate the correspondence between the previous terms and the terms used in this context. A rule corresponds to a strategy (or agent). The initial distribution of rules corresponds to the strategy composition. A different interacting rule corresponds to a new mode in our market experiment. By mimicking the rule-based ICA, we thus prepare several new agents to be tested among the traditional technical agents in the following experimental design.

3.1 Experimental design

Our experiment is conducted in three different modes, Experimental modes 1 to 3. We also examine the effects of several different initial configurations.

3.1.1 Different experimental modes

  • Experimental mode 1: Individual match, i.e., each new agent enters a round-robin tournament against the given technical agent configuration.

  • Experimental mode 2: Participation by all the members, i.e., all agents, including the set of new agents, enter a round-robin tournament against the given technical agent configuration.

  • Experimental mode 3: Half of the given set of agents (always including all the new agents) are randomly chosen and matched.

In all the experiments, we apply different spot price series taken from data outside the simulation system. In this experimental kit, the contract date of the futures transaction is set at 2 months for each round; each trading period is therefore 60 days. Thus, the kit conducts 10 rounds for each type of spot price time series (Fig. 5); a hypothetical way to generate series with these four shapes is sketched after the list below:

  • Descending series: the spot price time series with a large downward trend in the long run.

  • Oscillating series: the spot price time series with large oscillations in the long run.

  • Reversal series: the spot price time series that descends and then ascends in the long run.

  • Ascending series: the spot price time series with a large upward trend in the long run.
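The actual series in Fig. 5 come from data outside the simulator; the sketch below only shows a hypothetical way to generate test series with the same four qualitative shapes, where the base price, noise level, and 600-day length (10 rounds of 60 days) are assumptions.

```python
# Hypothetical generator of the four spot price shapes used as test inputs.
import numpy as np

def make_spot_series(kind: str, days: int = 600, base: float = 2500.0, seed: int = 0):
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, days)
    noise = rng.normal(0.0, 5.0, days).cumsum()            # random-walk component
    trend = {
        "descending": -500.0 * t,
        "ascending":   500.0 * t,
        "oscillating": 300.0 * np.sin(4 * np.pi * t),
        "reversal":    800.0 * (t - 0.5) ** 2 - 200.0,      # falls, then rises
    }[kind]
    return base + trend + noise

series = {k: make_spot_series(k)
          for k in ("descending", "oscillating", "reversal", "ascending")}
```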

Fig. 5 Spot price time series to be implemented

3.1.2 Different initial configuration

The default strategies of the U-Mart system are given later in Table 2. We also examine the effects of three different initial strategy configurations.

  • Initial Strategy Configuration 1 (ISC1): The default strategy configuration in the U-Mart system.

  • Initial Strategy Configuration 2 (ISC2): The configuration that removes all random strategies, i.e., Random and SRandom.

  • Initial Strategy Configuration 3 (ISC3): The configuration that removes all the agents other than random agents, i.e., the configuration consisting only of random agents, apart from MyAgents, which are retained.

It can be seen that ISC3 contains more randomness, because it includes only Random and SRandom agents apart from MyAgents.
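As an illustration, the three configurations can be thought of as filters over the default strategy list; in the sketch below, the strategy class names other than RandomStrategy, SRandomStrategy, and SFSpreadStrategy are placeholders, and the authoritative list is Table 2.

```python
# Hedged sketch of deriving ISC1-ISC3 from a default strategy list (names illustrative).
default_strategies = [
    "SRandomStrategy", "RandomStrategy",
    "TechnicalStrategyA", "TechnicalStrategyB", "TechnicalStrategyC",   # placeholders
    "SFSpreadStrategy",                                                 # MyAgent entries
]
RANDOM_KINDS = ("RandomStrategy", "SRandomStrategy")
MY_AGENTS = ("SFSpreadStrategy",)

isc1 = list(default_strategies)                                          # ISC1: defaults
isc2 = [s for s in default_strategies if s not in RANDOM_KINDS]          # ISC2: no random agents
isc3 = [s for s in default_strategies
        if s in RANDOM_KINDS or s in MY_AGENTS]                          # ISC3: random + MyAgents
```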

Under the above prescriptions, we adopt the default agents to run the U-Mart system. First, we show their profiles in Table 2.


Table 2 Strategy profiles employed as the default strategies in the U-Mart system

We also give the distribution list of traditional technical agents in Table 3.

Table 3 Pareto rankings and other measures in our experimental mode 1 under the default strategy agent set
Table 4 Density distribution of different strategies

In our experiments, we have chosen SFSpreadStrategy as the opponent to the given list of strategies. In the experimental tool kit, as usual, a newly designed agent is added; in this article, we focus on SFSpreadStrategy as the newly added agent. Two agents of SFSpreadStrategy are implemented as MyAgent. The behavior of this agent is akin to that of human agents who prefer risk aversion. In the event, the number of SFSpreadStrategy agents is 4 in total. Interestingly, however, we will see that one newly added agent of SFSpreadStrategy is not guaranteed to win, even though the other added agent of the same type is ranked at the top of the Pareto ranking.

Table 5 MyStrategies adopted in our experiments

3.1.3 Pareto ranking

We then evaluate the market performance of each experiment by employing 4 different measures: maximum profit, mean profit, winning times, and bankruptcy rate.Footnote 13 In the U-Mart assessment, we usually evaluate the performance on each of the 4 measures by ranking it over all strategies. Note that we need a multi-objective method to conduct the Pareto ordering of the agent strategies over the four-dimensional objectives. Fortunately, our experimental kit automatically generates the Pareto rankings over the four objectives.
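For readers who wish to reproduce the ordering offline, the following sketch reimplements the idea of Pareto ranking over the four objectives; it is not the U-Mart kit's own code, and the scores shown are toy numbers.

```python
# Pareto ranking over (max profit, mean profit, winning times, bankruptcy rate).
# The first three objectives are maximized, the last minimized; rank 1 means
# non-dominated, rank 2 means non-dominated once rank-1 strategies are removed, etc.

def dominates(a, b, signs):
    """True if a is at least as good as b on every objective and strictly better on one."""
    at_least = all(s * x >= s * y for x, y, s in zip(a, b, signs))
    strictly = any(s * x > s * y for x, y, s in zip(a, b, signs))
    return at_least and strictly

def pareto_ranks(scores, signs=(+1, +1, +1, -1)):
    remaining = dict(scores)                     # name -> objective tuple
    ranks, rank = {}, 1
    while remaining:
        front = [n for n, v in remaining.items()
                 if not any(dominates(w, v, signs) for m, w in remaining.items() if m != n)]
        for n in front:
            ranks[n] = rank
            del remaining[n]
        rank += 1
    return ranks

scores = {  # toy numbers, not the experimental results
    "SFSpreadStrategy1": (1200.0, 300.0, 7, 0.0),
    "SRandomStrategy0001": (900.0, 250.0, 6, 0.1),
    "TechnicalStrategyA": (400.0, 50.0, 2, 0.3),
}
print(pareto_ranks(scores))
```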

Fig. 6 Ranking on the three-dimensional objectives

In Fig. 6, the circled points indicate the top 3 strategies in the Pareto ranking: SFSpreadStrategy1, SFSpreadStrategy2, and SRandomStrategy0001. Figure 6 shows that the two SFSpread strategies are Pareto-dominant in the three-dimensional multi-objective space in experimental mode 1, as shown in Table 4.Footnote 14 The second most dominant strategy with respect to Pareto dominance is SRandomStrategy0002 (Table 5).

3.2 The acceleration experiments

In the U-Mart system, order matching in the market is of a hybrid type. That is, the traditional agents, usually called technical analytical agents, are matched with human agents. One of our purposes is to examine the characteristics of agent strategies using experiments. However, games with human agents are not suitable for long-term experiments. Thus, we adopt the acceleration experiments without human agents. The measures of the simulation results are ordered according to the ranking of their performance, from top to bottom in ascending order. Since we employ four measures, the evaluation is carried out using the multi-objective method already mentioned. We now present the simulation results of each experimental mode using radar charts, as shown in Figs. 7, 8, and 9.

Fig. 7 ISC1: Pareto rankings

Fig. 8 ISC2: Pareto rankings

Fig. 9 ISC3: Pareto rankings

The simulation results are summarized in Table 6. We focus on any shape retention of the performance configuration in the radar charts, i.e., across the initial strategy configurations and across the experimental modes. Table 6 indicates that there may be shape retention between ISC1 and ISC2, between Mode 1 and Mode 2, between Mode 1 and Mode 2 under ISC3, and between ISC2 and ISC3 under Mode 3. Using the acceleration experiments of the U-Mart system, we detected a block \([\text{ISC}_i, \text{Mode}_j],\ i, j = 1, 2\). This block corresponds to the next four radar charts: Figs. 7a, b and 8a, b.

  1. Here, we found a relatively similar configuration of strategies that is insensitive to both the experimental modes and the initial strategy configurations.

  2. It is also shown that the SFSpread strategy could not absolutely dominate the SRandom strategy, as shown in Figs. 7a, b and 8a, b. Some SFSpread agent can dominate some SRandom agent, but not every SFSpread agent dominates every SRandom agent. The converse statement also holds.

Table 6 Effects of the shape retentions of the radar charts

4 Identifying the fundamental agent configuration to realize any spot price series

The agent configurations examined in the above discussion cannot guarantee a price series that acts as an attractor, like the fundamentals. However, we are now ready to establish a special reference configuration that realizes a futures price series similar to any given spot price series. This special agent configuration was already investigated by Nakajima and Mori (2005) in an insightful study that resorted to powerful simulations in the U-Mart system.

4.1 StdAC: the standard agent configuration

First, we show the agent configuration of Nakajima and Mori (2005), which is given in the following manner (Table 7):

Table 7 The special agent set used to realize a futures price series similar to a given spot price series

We call this agent set the standard agent configuration (StdAC). The traditional "fundamentals" are not decided internally in the market. Yet the market fundamentals must work as a center of gravitation of the market, so this idea must be defined internally, inside the agent set at work in the market system concerned. Thus, the composite set of agent strategies should be regarded as the fundamentals of the market, in the sense that this set can always realize a price series similar to a given spot price series.

We finally employ this set to detect a critical configuration beyond which the futures price series diverges. Thus, we confirm a new approach to studying the market mechanism in this context.

4.2 The simulation results of convergence and divergence around the StdAC of technical strategies

We employ the StdAC at half the scale of the original Nakajima-Mori figures given in Table 7 and apply Experimental mode 2, defined in Sect. 3.1.1. We show the simulation results of this environment; changing the experimental mode does not cause any large variation in the results. Here, we examine the StdAC by means of the spot price series shown in Fig. 5.

Fig. 10 Convergence of the StdAC towards each spot price series

Next, we add 23 SRandomStrategy agents to the currently adopted StdAC. There is no discernible divergence from a given spot series. However, it is evident from the P values that the proximity between the spot and futures prices becomes worse for each given spot price series (Figs. 10, 11, 12).

Fig. 11 Convergence of the StdAC towards each spot series after 23 SRandom agents are added

Finally, after removing the added SRandom agents from the StdAC, we add 46 RandomStrategy agents to the currently adopted StdAC. The SRandom strategy is defined so as to place orders randomly around a given spot price, while the Random strategy places orders (sell and buy) without any reference to a given spot price. In this case, the divergence between the spot and futures prices becomes obvious.
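A rough sketch of the behavioral difference between the two strategies is given below; the price band and price range are illustrative assumptions, not the U-Mart defaults.

```python
# SRandom quotes around the current spot price; Random ignores the spot entirely.
import random

def srandom_order(spot_price: float, band: float = 20.0):
    side = random.choice(("buy", "sell"))
    price = spot_price + random.uniform(-band, band)    # anchored to the spot price
    return side, round(price, 1)

def random_order(price_min: float = 2000.0, price_max: float = 3500.0):
    side = random.choice(("buy", "sell"))
    price = random.uniform(price_min, price_max)        # no reference to the spot
    return side, round(price, 1)

# With many SRandom agents the futures price stays pulled toward the spot series;
# with many Random agents the two series can diverge, as Fig. 12 suggests.
```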

Fig. 12 Convergence of the StdAC with 46 added Random agents toward each spot price series

4.3 A brief report on the average Pareto rankings given the new fundamental agent configuration

Using an approach similar to that above for the Pareto rankings among the agents of the StdAC, we roughly examine their earning capabilities. For simplicity, we compare the average ranking of each strategy.

Fig. 13 StdAC: Pareto rankings

As long as we employ the StdAC, the spot-futures spread is always suppressed to small values. This is the reason why the SFSpread strategies can earn most effectively. Owing to a similar property, the SRandom strategy may earn almost as much. Given a large divergence between spot and futures, these advantages break down, as shown in Fig. 13c.

5 Concluding remarks

As we stated at the beginning of this article, we examined the sensitivity of the performance configurations across the initial strategy configurations and across the experimental modes. Furthermore, by resorting to the ingenious idea of the standard agent configuration (StdAC), we confirmed by simulation the convergent/divergent behaviors between the spot and futures price series. Thus, we suggest a new approach to agent-based market simulations.