Abstract
With the advent of the new era of Artificial Intelligence, we need to update our inferential methods in economics and the social sciences accordingly. Introducing even a slightly realistic consideration quickly reveals a very large domain of possibilities. In this article, we employ the AI market simulation system called U-Mart to model the efficiency of a realistic futures market. In an actual market, participating agents send orders either randomly or non-intelligently, even when they rely on their own unambiguous strategies. It has been noted that purely random orders often result in the best performance in the market. Thus, the market system may contain many redundancies. Although we cannot know an optimal solution in advance, we may still form a winning strategy. In a sense different from the efficient market hypothesis, we can thus affirm a certain statement on the efficiency of the market. This kind of analysis is essentially similar to the idea of the Fully Random, Rule-Based Interactive Cellular Automata (ICA), which is based on Alan Turing’s rule selection. Based on this hint, we can find a particular agent set that realizes a futures price series almost identical to the spot price series. This agent configuration set has already been identified by Nakajima and Mori (Design of experimental environment for artificial financial market. Mimeo, New York, 2005) and may provide us with a special reference point like the fundamentals. Thus, we call this set the standard agent configuration (StdAC). It should be noted that the traditional “fundamentals” are not internally determined by the market. By contrast, the StdAC plays its role as the fundamentals of price formation. We finally employ this set to detect a critical configuration from which the futures price series diverges. Thus, we confirm a new approach to studying the market mechanism in this context.
1 Toward rule selection and Turing’s idea
When the number of states is large, it may be challenging for us to formulate an appropriate rule. It then seems difficult to attain the desired purpose and the evolution of the system without finding any computable rule. Therefore, in this paper, we investigate the application of simulations, but in a smarter manner than currently exists (Fig. 1).
1.1 Wolfram interactive cellular automaton
To examine rule selection, we cite the experiments on elementary cellular automata. In particular, with the publication of A New Kind of Science (Wolfram 2002), it turned out that the automaton of Wolfram rule 110 fulfills the criteria of Turing completeness. This was among the major problems that interested Wolfram. Rule 110 and similar rules are now being explored (Table 1).Footnote 1
The cellular automaton has a simple structure, but it is capable of generating complicated behaviors such as those of Class 4. The figures of Classes 1–4 are reproduced in Fig. 2.
Cellular automata (CA) can be classified according to the complexity and information produced by the behavior of the CA pattern:
Class 1: Fixed; all cells converge to a constant black or white set.
Class 2: Periodic; repeats the same pattern, like a loop.
Class 3: Chaotic; pseudo-random.
Class 4: Complex local structures; exhibits behaviors of both Class 2 and Class 3; likely to support universal computation (Carvalho 2011).
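To make the rule notion concrete, the classes above can be explored with a minimal elementary-CA updater. The following sketch is our own illustration (not code from the paper); it applies an 8-bit Wolfram rule such as rule 110 with periodic boundary conditions:

```python
def eca_step(cells, rule=110):
    """Apply one step of an elementary cellular automaton.

    Each cell's next state is looked up from the 8-bit `rule` number
    using the (left, self, right) neighborhood, with periodic boundaries.
    """
    n = len(cells)
    # Bit i of `rule` gives the next state for neighborhood value i.
    table = [(rule >> i) & 1 for i in range(8)]
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# Rule 110 from a single black cell: complex, Class-4 structures emerge.
row = [0] * 20
row[10] = 1
for _ in range(5):
    row = eca_step(row, 110)
```

Running the loop for many more steps and stacking the rows reproduces the familiar triangular patterns classified above.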
By resorting to the Fully Random, Five-Rule Interactive Cellular Automata (ICA) of Mitchell and Beyon (2011), we can easily examine the effects of the heterogeneous interactions of rules. We employ a reduced version of the Five-Rule ICA, i.e., the Three-Rule ICA, to examine the effects of the heterogeneous interactions of three different rules, which makes it easy to compare the effects due to heterogeneous interactions of different rules around rule 110. When only the rule dynamics are considered, we can analyze the effects of rules on the overall dynamics (Fig. 3).
One possible application of FRICAs is a more refined classification system based on, for instance, how damaging the inclusion of a given rule is to the universal behavior of rule 110. It is also possible that systems in nature mimic the process of choosing randomly for each operation from a limited set of functional rules.
Using the Five-Rule ICA, we can determine the effects as the number of copies of Rule 110 is gradually increased to five. The initial distribution of the 5 rules is set as {Rule 23, Rule 183, Rule 18, Rule 238, Rule 12}. The first component of the initial distribution is replaced with Rule 110 to yield {Rule 110, Rule 183, Rule 18, Rule 238, Rule 12}. By applying the same procedure to the last result, the second component is also replaced with Rule 110 to yield {Rule 110, Rule 110, Rule 18, Rule 238, Rule 12}. By repeating this procedure, we finally obtain a set in which all components are Rule 110. We can observe the different interactive heterogeneous rules as the number of copies of Rule 110 varies (Fig. 4).
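The replacement procedure above can be sketched as follows. As a simplifying assumption, we take the fully random ICA to draw one rule uniformly at random from the current set at each time step; the function names are ours, not from the original demonstration:

```python
import random

def eca_step(cells, rule):
    """One step of an elementary CA under the given 8-bit rule number."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

def frica_run(cells, rules, steps, seed=0):
    """Fully random ICA: each step applies a rule drawn at random from `rules`."""
    rng = random.Random(seed)
    history = [cells]
    for _ in range(steps):
        cells = eca_step(cells, rng.choice(rules))
        history.append(cells)
    return history

# Gradually replace components of the initial set with Rule 110, as in the text.
base = [23, 183, 18, 238, 12]
for k in range(len(base) + 1):
    mixed = [110] * k + base[k:]
    hist = frica_run([0] * 30 + [1] + [0] * 30, mixed, 50)
```

Plotting each `hist` as an image shows how the interaction pattern changes as Rule 110 comes to dominate the set.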
2 Market mechanism with redundancies and a deeper logic of complexity
As Mainzer (2007), a philosopher of science, recommended, the idea of creative coincidence in human history can be applied to technological innovation. The application of creative coincidence to innovation then suggests a new idea that replaces J. Schumpeter’s creative destruction; see Aruka (2009) for more details.
A moment of creative coincidence can be revealed by examining the logical depth. Mainzer originally studied the Turing machine. The logical depth may be defined in the following manner. With the algorithmic probability \(P_s\) for a randomly generated program’s output, we have a measure of the logical depth of s at our disposal. A sequence s has logical depth when the largest proportion of \(P_s\) is contributed by short programs that require a large number of computational steps for the production of s. The DNA sequences that have evolved over millions of years with many redundancies and contingencies can survive by generating compact programs that require an enormous number of computational steps for the development of the entire description of a complex organism. In this sense, they have great logical depth, the depth of information generated by a long and complex evolution.
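The algorithmic probability \(P_s\) used in this definition can be written, following the standard Solomonoff–Levin form (our notation, not Mainzer's), as:

```latex
% Algorithmic probability that a universal prefix machine U,
% fed a random program p of length |p|, outputs the sequence s.
P_s = \sum_{p \,:\, U(p) = s} 2^{-|p|}
```

Short programs (small \(|p|\)) contribute the dominant terms of this sum, and logical depth then measures how many computational steps those dominant programs need to produce s.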
From an engineering viewpoint, a complex system with a deeper logic may be interpreted as a program that has various elaborations at each implemented stage. A proper degree of redundanciesFootnote 2 is rather indispensable for generating innovation. In this sense, a series of small, new inventions will assure a great innovation. In other words, a deeper logic in engineering may imply plentiful, higher-precision, and high-accuracy elaborations. These elaborations are in essence irrelevant to either a pecuniary motive or a market mechanism. Such an idea connected with a new invention is often rooted in our traditional techno-culture. This point of view may be much encouraged by Brian Arthur, as we describe in the next subsection.
2.1 Market mechanism in evolution
We follow the essential idea of Arthur (2009) who believes that technology is a superclass of economy.Footnote 3 In this context, we can say that technology creates itself out of itself.Footnote 4 His idea also applies to the market mechanism.
As Arthur (2009) also illustrated, the financial market did not prepare a new system for the safe option market. In contrast, the so-called renovation of financial business then became feasible, because computers evolved to solve the complicated risk calculations that were needed for options transactions (Arthur 2009, 154). The financial market is now exposed to high-frequency trading (HFT). However, HFT is also a product of the evolution of computability. Market theory is utterly irrelevant to the evolution of the computer, yet it is this evolution that has realized HFT. Computers evolved to solve the complicated transactions that were needed for HFT either in stock exchanges or in currency exchanges. It must be noted that HFT is changing the institutional setting of the transaction. The prestige of a seat at the exchange is diminishing, because high-spec servers are endowed with virtually the same membership as that at the stock exchange. More generally, technological innovations can change the qualities of transactions. It has been remarked that market theory does not specify how the actual market system is constructed. In this sense, the market of the existing theory has never been proven to exist.
Consider the example of Sake brewing: there are two ways of brewing, batch and continuous processing. The quality and taste of Sake become different when a different brewing process is used. It is likewise known in the market auction that there are two ways of matching: the batch auction and the continuous double auction. The different auction methods apparently bring different results. In the reality of the exchange, even a market equilibrium does not necessarily hold. In the older stock exchanges prior to computer processing, the closing time was often extended to allow the markets to settle down. It actually takes much time to arrive at a settlement price. The continuous double auction is generally regarded as an on-the-spot decision to find a matching as soon as possible. The stock exchange is always monitoring the time series of on-going transactions and seeks to find matches by suggesting a current price band within which current orders could be successfully settled. This is a kind of market engineering.
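The two matching schemes can be contrasted with a toy model. This is our own minimal sketch, not the U-Mart matching engine; the order-book representation and function names are assumptions. The batch (itayose-style) call auction picks a single clearing price that maximizes matched volume, while the continuous double auction matches each incoming order on the spot:

```python
def batch_auction_price(buys, sells):
    """Single clearing price maximizing matched volume over (price, qty) orders."""
    candidates = sorted({p for p, _ in buys + sells})
    best_price, best_vol = None, -1
    for p in candidates:
        demand = sum(q for bp, q in buys if bp >= p)   # buyers willing at p
        supply = sum(q for sp, q in sells if sp <= p)  # sellers willing at p
        vol = min(demand, supply)
        if vol > best_vol:
            best_price, best_vol = p, vol
    return best_price, best_vol

def continuous_match(book_sells, buy_price, buy_qty):
    """Match an incoming buy order against resting asks, best price first."""
    trades = []
    for ask_price, ask_qty in sorted(book_sells):
        if buy_qty == 0 or ask_price > buy_price:
            break
        q = min(buy_qty, ask_qty)
        trades.append((ask_price, q))
        buy_qty -= q
    return trades
```

With the same orders, the batch auction settles everything at one price, whereas the continuous auction produces a sequence of trades at the resting prices, which is exactly why the two schemes can bring different results.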
The mathematical reasoning of the market process skips over the engineering part of the process. Even given a particular bundle of binding devices, which the stock exchange cultivated over many years, the dealers cannot always arrive at an equilibrium state by themselves. No guarantee can be secured simply by assuming a black box for the market. The architecture of a market must be described each time a particular market is discussed (see Aruka 2017). The market architecture is indispensable to establish equilibrium. A mathematical statement of the existence of an equilibrium tells us nothing about the reality of the market mechanism. Mizuno and Watanabe (2010) and Mizuno et al. (2010), who are Japanese econophysicists, have already verified that the results generated by the online market system called “KAKAKU dot Com” (http://kakaku.com) often do not satisfy the conditions of perfect competition that market theory recommends, even though the online market is always designed to fulfill those conditions. Contrary to traditional markets, in the online market, some firms that adopt a price greater than the lowest price will never be driven away from the market over the course of a year.
2.2 A short history of Japanese commercial engineering
In Japan, the shift to the capitalistic mode of production had certainly been prepared by a series of matured circumstances in various spheres by at least the 17th century, when the international financial institutions and networks were formed. The underlying industrial capacities, financial capacities, and commercial networks were all sufficient to develop a full-fledged launching point for capitalism. The most typical embodiment of economic and commercial organization in Japan was the Osaka-Dojima Rice-Stamp Exchange. These factors cannot be self-organized by market forces.Footnote 5 Often, they were outcomes of creative coincidences connected with historically ingenious persons. In the case of the Osaka-Dojima system, the figure was YODOYA.Footnote 6
Through brilliant entrepreneurship, the YODOYA family developed both the direct transaction of rice and the indirect transaction by way of the bill exchange of rice stamps during the 17th century in Japan. This device was fully implemented in the institutions that supported modern financial speculation. In particular, the bill exchange system brought YODOYA great wealth. In 1730, the indirect exchange was officially sanctioned by the Shogun Government of Japan.Footnote 7
This year marked the world-first establishment of a modern futures market system. The architecture of the futures market was in fact prepared by the Japanese. This may be regarded as a technological innovation.
The U-Mart system is an artificial intelligence futures transaction system with a long lifetime that was initiated by Japanese computer scientists in 1998 (see Aruka 2015, pp. 111–112; Shiozawa et al. 2008). This system is compatible with both the batch and the continuous double auction. Moreover, either human agents or algorithmic agents can join the system. Two eminent properties were built into the U-Mart system from the beginning. One is the participation system of hybrid agents; after the U-Mart system was released, reality moved closer to the U-Mart. The other is the implementation of the acceleration experiment tool.Footnote 8 The latter was indicative of the dominance of HFT. In this section, we use the acceleration experiment tool. In our context, it is especially noteworthy that our system, originally designed as a virtual system, has turned into an actually realized system. This is called “Equity Index Futures” at the Osaka Exchange, which is a branch of JPX.Footnote 9
The development of the U-Mart system was mainly engineer-driven,Footnote 10 and is now internationally recognized as a good platform for AI markets. The source code of the project is open for the public.Footnote 11
3 Examining the futures market by the U-Mart simulation in the default agent configurations
One of the most interesting features of market transactions is that a zero-intelligence agent is frequently the dominant winner of the market game. It has also been easily verified in the U-Mart system that the random agentFootnote 12 is often the winner. This is a reason why we should doubt the traditional idea that rational/intelligent behavior can optimize the performance of the market. It is also interesting to notice that equilibrium cannot be established without dropping the assumption of homogeneous agents. If all agents select the same behavior of selling, or of buying, there may not be any settlement. Therefore, it matters to us what types of agents are implemented. These considerations motivate us to conduct a realistic simulation using the U-Mart system.
As argued in the last section, an evolving system in which the participating agents are heterogeneous and mutually interacting may be a system with many redundancies. In our realistic simulation, the market is an evolving system in which the initial conditions will bring similar results, even though the various heterogeneous agents are either intelligently or randomly interacting. In such a complex system, as already discussed, the ICA deals with the problem of “how damaging the inclusion of a given rule is to the universal behavior”. Thus, the ICA repeats a similar procedure to obtain a certain effect based on the universal rule by referring to the different interactive rules.
Now, we incorporate Class 4, as defined in the first section, into the market system. In the market, at first, various types of participants are locally formed and then mutually interact in complex and interesting ways. They form local structures that are able to survive for long periods of time. In the first section, the Wolfram ICA simulations examined the attractor formation. Conversely, in the market experiment, we will detect any sensitivity generated in the relationship between an initial strategy configuration (ISC) and its final performance configuration. A final performance is represented by some special form. We will examine whether a final performance configuration is sensitive to its initial strategy configuration or not. Then, the shape retention of the performance configuration among the initial strategy configurations and among the experimental modes is addressed.
Here, we rearrange the previous terms used in this context. A rule corresponds to a strategy (or agent). The initial distribution is the strategy composition. A different interactive rule may find a new mode in our market experiment. By mimicking the rule-based ICA, we thus prepare several agents to be tested among the traditional technical agents in the following experimental design.
3.1 Experimental design
Our experiment will be conducted in three different modes from Experiments 1 to 3. We also examine the effects in several different initial configurations.
3.1.1 Different experimental modes
- Experimental mode 1: Individual match, i.e., each new agent enters a round robin tournament against the given technical agent configuration.
- Experimental mode 2: Participation by all members, i.e., all agents, including the set of new agents, enter a round robin tournament against the given technical agent configuration.
- Experimental mode 3: Half of a given set of agents (including all the new agents) are randomly chosen and matched.
In all the experiments, we apply different spot price series by extrapolating data from outside the simulation system. In this experiment kit, the contract date of the futures transaction is set at 2 months for each round; each trading period is thus 60 days. The kit will then conduct 10 rounds for each type of spot price time series (Fig. 5):
- Descending series: the spot price time series with a large downward trend in the long run.
- Oscillating series: the spot price time series with a large oscillation in the long run.
- Reversal series: the spot price time series that descends and then ascends in the long run.
- Ascending series: the spot price time series with a large upward trend in the long run.
3.1.2 Different initial configuration
The default strategies of the U-Mart system are given in Table 2 later. We also examine the effects due to three different initial strategy configurations.
- Initial Strategy Configuration 1 (ISC1): The default strategy configuration in the U-Mart system.
- Initial Strategy Configuration 2 (ISC2): The configuration that removes all random strategies, i.e., Random and SRandom.
- Initial Strategy Configuration 3 (ISC3): The configuration that removes all the agents other than random agents, i.e., the configuration composed of random agents except for MyAgents.
It can be seen that ISC3 contains more randomness, because it comprises only Random and SRandom agents, apart from MyAgents.
Under the above prescriptions, we adopt the default agents to run the U-Mart system. First, we show their profiles in Table 2.
We also give the distribution list of traditional technical agents in Table 3.
In our experiments, we have chosen SFSpreadStrategy as an opponent to the given list of strategies. In the experimental tool kit, as usual, a newly designed agent is added. In this article, we focus on SFSpreadStrategy as the newly added agent. Two agents of SFSpreadStrategy are implemented as MyAgent. The behaviors of this agent are akin to those of human agents who prefer risk aversion. In the event, the number of SFSpreadStrategy agents is 4 in total. Interestingly, however, we will see that one newly added agent of SFSpreadStrategy is not guaranteed to win, although the other added agent of the same type is ranked at the top in the Pareto ranking.
3.1.3 Pareto ranking
We then evaluate the market performance of each experiment by employing 4 different measures: maximum profit, mean profit, winning times, and bankruptcy rate.Footnote 13 In the U-Mart assessment, we usually evaluate the performance on each of the 4 measures by ranking it over all strategies. Note that we need a multi-objective method to conduct the Pareto ordering of the agent strategies over the four-dimensional objectives. Fortunately, our experimental kit provides an automatic generation of the Pareto rankings of the four objectives.
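The Pareto ordering can be sketched by non-dominated sorting. The sketch below is our own illustration (the strategy names and score values are made up), assuming the first three measures are maximized and the bankruptcy rate is minimized:

```python
def dominates(a, b):
    """a Pareto-dominates b: at least as good in every objective, strictly
    better in one. Objectives: (max profit, mean profit, wins) up,
    bankruptcy rate down."""
    ge = a[0] >= b[0] and a[1] >= b[1] and a[2] >= b[2] and a[3] <= b[3]
    gt = a[0] > b[0] or a[1] > b[1] or a[2] > b[2] or a[3] < b[3]
    return ge and gt

def pareto_ranks(scores):
    """Rank 1 = the non-dominated set; peel it off and repeat."""
    remaining = dict(scores)
    ranks, r = {}, 1
    while remaining:
        front = [k for k, v in remaining.items()
                 if not any(dominates(w, v) for w in remaining.values())]
        for k in front:
            ranks[k] = r
            del remaining[k]
        r += 1
    return ranks

scores = {  # (max profit, mean profit, wins, bankruptcy rate) -- illustrative only
    "SFSpreadStrategy1": (9.0, 4.0, 6, 0.0),
    "SRandomStrategy0001": (7.0, 5.0, 5, 0.0),
    "TrendStrategy1": (6.0, 3.0, 4, 0.1),
}
```

In this made-up example, the first two strategies do not dominate each other (one has the higher maximum profit, the other the higher mean profit), so both receive rank 1, while the third is dominated and falls to rank 2.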
In Fig. 6, the circled points indicate the top 3 strategies in the Pareto ranking: SFSpreadStrategy1, SFSpreadStrategy2, and SRandomStrategy0001. Figure 6 shows that the two SFSpread strategies are Pareto-dominant in the three-objective space in experimental mode 1, as shown in Table 4.Footnote 14 The second most dominant strategy with respect to Pareto dominance is SRandomStrategy0002 (Table 5).
3.2 The acceleration experiments
In the U-Mart system, matching orders in the market are of the hybrid type. That is, the traditional agents, usually called technical analytical agents, are matched with human agents. One of our purposes is to examine the characteristics of agent strategies using experiments. However, games with human agents are not suitable for long-term experiments. Thus, we adopt acceleration experiments without human agents. The measures of the simulation results are ordered according to the ranking of their performance from top to bottom in ascending order. Since we employ four measures, the evaluation is achieved using the multi-objective method mentioned above. We now list the simulation results of each mode of the experiments using radar charts, as shown in Figs. 7, 8, and 9.
The simulation results are summarized in Table 6. We focus on any shape retention of the performance configuration, i.e., among the initial strategy configurations and among the experimental modes in the radar charts. Table 6 indicates that there may be shape retention between ISC1 and ISC2, between Mode 1 and Mode 2, between Mode 1 and Mode 2 under ISC3, and between ISC2 and ISC3 under Mode 3. Using the acceleration experiments of the U-Mart system, we detected a block matrix \([\text{ISC}_i, \text{Mode}_i]\), \(i=1,2\). This matrix block is represented by the next four radar charts: Figs. 7a, b, 8a, b.
1. Here, we found a relatively similar configuration of strategies that is insensitive to both the experimental modes and the initial strategy configurations.
2. It is also shown that the SFSpread strategy could not absolutely dominate the SRandom strategy, as shown in Figs. 7a, b, 8a, b. Some SFSpread agents dominate some SRandom agents, but not every SFSpread dominates every SRandom. The converse also holds true.
4 Identifying the fundamental agent configuration to realize any spot price series
The agent configurations examined in the above discussion cannot guarantee an attractive price series like the fundamentals. However, we are now ready to establish a special reference configuration that realizes a futures price series similar to any given spot price series. This special agent configuration was already identified by Nakajima and Mori (2005) through insightful and powerful simulations on the U-Mart system.
4.1 StdAC: the standard agent configuration
First, we show that the agent configuration of Nakajima and Mori (2005) is given in the following manner:
We call this agent the standard agent configuration (StdAC). The traditional “fundamentals” are not internally decided in the market. The market fundamentals must work as a center of gravitation of the market. This idea must then be internally defined inside the agent set to work in the concerned market system. Thus, the composite set of agent strategies should be used as the fundamentals of the market, in the sense that this set can always realize the price series similar to a given spot price series.
We finally employ this set to detect a critical configuration from where the futures price series diverges. Thus, we confirm a new approach to study the market mechanism in this context.
4.2 The simulation results of convergence and divergence around the Std AC of technical strategies
We employ the StdAC using half the scale of the original Nakajima-Mori figures listed in Table 7, applying Experimental mode 2 as defined in Sect. 3.1.1. We show the simulation results of this environment. Changing the experimental mode does not cause any large variations in the results. Here, we examine the StdAC by means of the spot price series used in Fig. 5.
Next, we add 23 SRandomStrategy agents to the currently adopted StdAC. There is no discernible divergence from a given spot series. However, the P values show that the proximity between the spot and futures prices worsens for each given spot price series (Figs. 10, 11, 12).
Finally, after removing the SRandom agents from the StdAC, we add 46 RandomStrategy agents to the currently adopted StdAC. The SRandom strategy is defined to randomly place orders around a given spot price, while the Random strategy places orders (sell and buy) without any reference to a given spot price. In this case, the divergence between the spot and futures prices becomes obvious.
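The contrast between the two strategies can be sketched as follows. This is our own reading of the definitions above, not U-Mart source code; the price-band parameters and function names are assumptions:

```python
import random

def srandom_order(spot_price, width=20, rng=random):
    """SRandom: randomly buy or sell at a price near the given spot price."""
    side = rng.choice(["buy", "sell"])
    price = spot_price + rng.randint(-width, width)
    return side, price

def random_order(price_min=1, price_max=4000, rng=random):
    """Random: randomly buy or sell at a price unrelated to the spot price."""
    side = rng.choice(["buy", "sell"])
    price = rng.randint(price_min, price_max)
    return side, price
```

Because SRandom orders cluster around the spot, they pull the futures price toward the spot series, while Random orders scattered over the whole price range let the two series drift apart, consistent with the divergence observed above.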
4.3 A brief report of Pareto rankings in the average given the new fundamental agent configuration
Using an approach similar to that above for the Pareto rankings among the agents of the StdAC, we roughly examine their earning capabilities. For simplicity, we compare the average ranking of each strategy.
As long as we employ the StdAC, the SF spread is always suppressed to smaller values. This is the reason why SFSpread strategies can earn most effectively. Due to a similar property, the SRandom strategy may also earn almost equally well. Given a big divergence between the spot and futures prices, these advantages break down, as shown in Fig. 13c.
5 Concluding remarks
As we stated at the beginning of this article, we examined the sensitivity of performance configurations among the initial strategy configurations and among the experimental modes. Furthermore, by resorting to the intelligent idea of the standard agent configuration (StdAC), we virtually confirmed the convergent/divergent behaviors between the spot and futures price series. Thus, we suggest a new approach to agent-based market simulations.
Notes
“There are 256 such automata, each of which can be indexed by a unique binary number whose decimal representation is known as the “rule” for the particular automaton”, see http://mathworld.wolfram.com/CellularAutomaton.html.
The redundancy of a mathematical system means an excessive degree of freedom. Redundancies in this context mean uncertainties, especially in their availability and usefulness.
It arose from the productive methods and legal and organizational arrangements that we use to satisfy our needs. Therefore, it is issued from all these captures of phenomena and subsequent combinations (Arthur 2009, 3).
Early technologies formed using existing primitive technologies as components. These new technologies in time become possible component building blocks for the construction of further new technologies. Some of these in turn go on to become possible building blocks for the creation of yet newer technologies (Arthur 2009, 21).
Of course, the market force could serve as a complementary assistance.
See “Yodoya-Komeichi,” the first securities exchange in the nation, in the MarketsWiki article http://marketswiki.com/wiki/Osaka_Exchange.
See the Wikipedia articles on “Dojima Rice Exchange” and “Osaka Securities Exchange”: https://en.wikipedia.org/wiki/Osaka_Securities_Exchange.
This kit was publicized when U-Mart System ver.2 was released. However, the newest version of the U-Mart System is ver.4. The new experimental tool kit will be released soon, when the latest version is finally confirmed.
JPX article(Sept 19, 2017): 10th Anniversary of Equity Index Futures and Options Nighttime Trading http://www.jpx.co.jp/english/corporate/news-releases/0060/20170919-01.html.
See the website of the U-Mart Organization: http://www.u-mart.org/html/index.html.
U-Mart started in 1998 as V-Mart (Virtual Mart) but is now called U-Mart, which is the abbreviation of Unreal Market as an Artificial research Test bed. The U-Mart Project already has many publications, in both Japanese and English. We note two English books on U-Mart: Shiozawa et al. (2008), published in the Springer Series on Agent-Based Social Systems, and Kita et al. (2016), a volume of Evolutionary Economics and Social Complexity Science; see http://www.springer.com/series/11930.
Speaking precisely, an SRandom agent is one who sends random orders around the fluctuating spot price.
Bankruptcy does not often occur; in our experiments, no bankruptcy was observed. Thus, this measure may be removed from our figures.
We removed the bankruptcy from the measures.
References
Arthur WB (2009) The nature of technology. Free Press, New York
Aruka Y (2009) Klaus Mainzer, Der kreative Zufall: Wie das Neue in die Welt kommt (The Creative Chance. How Novelty Comes into the World (In German)), C.H. Beck, München, 2007, 283 pages. Evolut Inst Econ Rev 5(2):307–316
Aruka Y (2015) Evolutionary foundations of economic science: how can scientists study evolving economic doctrines from the last centuries? (Springer series: evolutionary economics and social complexity science, vol 1). Springer, Tokyo
Aruka Y (2017) Special feature: preliminaries towards ontological reconstruction of economics-theories and simulations. Evol Inst Econ Rev 14(2):409–414
Carvalho DS (2011) Classifying the complexity and information of cellular automata. http://demonstrations.wolfram.com/ClassifyingTheComplexityAndInformationOfCellularAutomata/
Kita H, Taniguchi T, Nakajima Y (2016) Realistic simulation of financial markets: analyzing market behaviors by the third mode of science (Springer series: evolutionary economics and social complexity science, vol 4). Springer, Tokyo
Mainzer K (2007) Der kreative Zufall: Wie das Neue in die Welt kommt. C.H. Beck, München
Mitchell E, Beyon T (2011) Fully random, five-rule interactive cellular automata (ICA). http://demonstrations.wolfram.com/FullyRandomFiveRuleInteractiveCellularAutomataICA/
Mizuno T, Nirei M, Watanabe T (2010) Closely competing firms and price adjustment: some findings from an online marketplace? Scand J Econ 112(4):673–696
Mizuno T, Watanabe T (2010) A statistical analysis of product prices in online market. Eur Phys 3B(76):501–505
Nakajima Y, Mori N (2005) Design of experimental environment for artificial financial market. Mimeo, New York
Shiozawa Y, Nakajima Y, Matsui H, Koyama Y, Taniguchi K, Hashimoto F (2008) Artificial market experiments with the U-mart system (Springer series on agent based social systems, vol 4). Springer, Tokyo
Wolfram S (2002) A new kind of science. Wolfram Media, Champaign
Aruka, Y., Nakajima, Y. & Mori, N. An examination of market mechanism with redundancies motivated by Turing’s rule selection. Evolut Inst Econ Rev 16, 19–42 (2019). https://doi.org/10.1007/s40844-018-0115-8
Keywords
- Interactive CA
- Rule selection
- U-Mart system
- Acceleration experiment
- Random strategy
- Standard agent configuration (StdAC)