A game-semantic model of computation

The present paper introduces a novel notion of ‘(effective) computability,’ called viability, of strategies in game semantics in an intrinsic (i.e., without recourse to the standard Church–Turing computability), non-inductive, non-axiomatic manner and shows, as a main technical achievement, that viable strategies are Turing complete. Consequently, we have given a mathematical foundation of computation in the same sense as Turing machines but beyond computation on natural numbers, e.g., higher-order computation, in a more abstract fashion. As immediate corollaries, some of the well-known theorems in computability theory such as the smn theorem and the first recursion theorem are generalized. Notably, our game-semantic framework distinguishes high-level computational processes that operate directly on mathematical objects such as natural numbers (not on their symbolic representations) and their symbolic implementations that define their ‘computability,’ which sheds new light on the very concept of computation. This work is intended to be a stepping stone toward a new mathematical foundation of computation, intuitionistic logic and constructive mathematics.


Introduction
The present work introduces an intrinsic, non-inductive, non-axiomatic formulation of '(effectively) computable' strategies in game semantics and proves as a main theorem that they are Turing complete. This result leads to a novel mathematical foundation of computation beyond classical computation, e.g., higher-order computation, that distinguishes high-level and low-level computational processes, where the latter defines 'effective computability' of the former.
Convention We shall informally use computational processes and algorithms almost as synonyms of computation, though these terms place more emphasis on 'processes.'

Search for Turing machines beyond classical computation
Turing machines (TMs), introduced in the classic work [67] by Alan Turing, have been widely accepted as giving a reasonable, highly convincing definition of 'effectivity' or '(effective) computability' of (partial) functions on (finite sequences of) natural numbers, which we call in this paper recursiveness, classical computability or Church–Turing computability, in a mathematically rigorous manner. This is because 'computability' of a function intuitively means the very existence of an algorithm that implements the function's input/output behavior, and TMs are none other than a mathematical formulation of this informal concept.
In mathematics, however, there are various kinds of non-classical computation, where by classical computation we mean computation that merely implements a function on natural numbers; there are a variety of mathematical objects other than natural numbers, for which TMs have certain limitations.
As an example of non-classical computation, consider higher-order computation [50], i.e., computation that takes (as an input) or produces (as an output) another computation, which abounds in mathematics, e.g., quantification in mathematical logic, differentiation in analysis or simply an application (f, a) → f(a) of a function f : A → B to an argument a ∈ A. However, TMs cannot capture higher-order computation in a natural or systematic fashion. In fact, although TMs may compute on symbols that encode other TMs, e.g., consider universal TMs [35,48,64], they cannot compute on 'external behavior' of an input computation, which implies that the input is limited to a recursive one (to be encoded); however, it makes perfect sense to consider computation on non-recursive objects such as non-recursive real numbers. For this point, one may argue that oracle TMs [47,64] may treat an input computation as an oracle, a black-box-like computation that does not have to be recursive; however, an oracle is like a function (rather than a computational process) that computes just in a single step, which appears conceptually mysterious and technically ad hoc. (Another approach is to give an input computation as a potentially infinite sequence of symbols on the input tape [69], but it may be criticized in a similar manner.) On the other hand, most of the other models of higher-order computation are, unlike TMs, either syntactic (such as λ-calculi and programming languages [9,50]), inductive and/or axiomatic (such as Kleene's schemata S1-S9 [40,41]) or extrinsic (i.e., reducing to classical computation by encoding whose 'effectivity' is usually left imprecise [16,50]), thus lacking the semantic, direct, intrinsic nature of TMs. Also, unlike classical computability, a confluence between different notions of higher-order computability has rarely been established [50].
For this problem, it would be a key step to establish a TM-like model of higher-order computation since it may tell us which notion of higher-order computability is the 'correct' one.

Search for mathematics of high-level computational processes
Perhaps more crucially than the limitation for non-classical computation mentioned above, one may argue that TMs are not appropriate as mathematics of computational processes since computational steps of TMs are often too low-level to see what they are supposed to compute. In other words, we need mathematics of high-level computational processes that gives a 'bird's-eye view' of low-level computational processes. 1 Also, what TMs formulate is essentially symbol manipulation; however, the content of computation on mathematical, semantic, non-symbolic objects seems completely independent of its symbolic representation, e.g., consider a process (not a function) to add numbers or to take the union of sets.
Therefore, it would be rather appropriate, at least from the conceptual and the mathematical points of view, to formulate such high-level computational processes in a more abstract, in particular syntax-independent, manner, in order to explain low-level computational processes, and then regard the latter as executable symbolic implementations of the former.

Our research problem: mathematics of computational processes
To summarize, it would be reasonable and meaningful from both the conceptual and the mathematical viewpoints to develop mathematics of abstract (in particular syntax-independent), high-level computational processes as well as executable, low-level ones beyond classical computation, such that the former is defined to be 'effectively computable' if it is implementable or representable by the latter.
In fact, this (or a similar) perspective is nothing new and is shared with various prominent researchers; for instance, Robin Milner stated: '… we should have achieved a mathematical model of computation, perhaps highly abstract in contrast with the concrete nature of paper and register machines, but such that programming languages are merely executable fragment of the theory …' [52] We address this problem in the present paper. However, since there are so many kinds of computation, e.g., parallel, concurrent, probabilistic, non-deterministic and quantum, as the first step, this paper focuses on a certain kind of higher-order, sequential (i.e., at most one computational step may be performed at a time) computation, which is based on (sequential) game semantics 2 introduced below.
An advantage of game semantics is its flexibility: It models a wide range of programming languages by simply varying constraints on strategies [6], which enables one to systematically compare and relate different languages ignoring syntactic details. Also, as full completeness and full abstraction results [15] in the literature have demonstrated, game semantics in general has an appropriate degree of abstraction (and thus it has a good potential to be mathematics of high-level computational processes). Finally, yet another strong point of game semantics is its conceptual naturality: It interprets syntax as 'dynamic interactions' between the participants of games, providing a computational, intensional explanation of syntax in a natural, intuitive (yet mathematically precise) manner. Informally, one can imagine that games provide a high-level description of interactive computation between a TM and an oracle, and therefore, they seem appropriate as an approach to the research problem defined in Sect. 1.3. Note that such an intensional nature stands in sharp contrast to the traditional domain-theoretic denotational semantics [8], which, e.g., cannot capture sequentiality of PCF (but the game models [7,38,55] can).
In the following, let us give a brief, informal introduction to games and strategies (as defined in [6]) in order to sketch the main idea of the present paper.
A game, roughly, is a certain kind of rooted forest whose branches represent possible 'developments' or (valid) positions of a 'game in the usual sense' (such as chess, poker, etc.). Moves of a game are nodes of the game, where some moves are distinguished and called initial; only initial moves can be the first element (or occurrence) of a position of the game. Plays of a game are increasing sequences ε, m₁, m₁m₂, … of positions of the game, where ε is the empty sequence. For our purpose, it suffices to focus on rather standard sequential (as opposed to concurrent [2]) and unpolarized (as opposed to polarized [49]) games played by two participants, Player (P), who represents a 'computational agent,' and Opponent (O), who represents a 'computational environment,' in each of which O always starts a play (i.e., unpolarized), and then they alternately and separately (i.e., sequential) perform moves allowed by the rules of the game. Strictly speaking, a position of each game is not just a sequence of moves: Each occurrence m of an O- (resp. P-) non-initial move in a position points to a previous occurrence m′ of a P- (resp. O-) move in the position, representing that m is performed specifically as a response to m′. A strategy on a game, on the other hand, is what tells P which move (together with a pointer) she should make at each of her turns in the game. Hence, a game semantics ⟦_⟧ of a programming language L interprets a type A of L as a game ⟦A⟧ that specifies possible plays between P and O, and a term M : A 3 of L as a strategy ⟦M⟧ that describes for P how to play on ⟦A⟧; an execution of the term M is then modeled as a play of ⟦A⟧ in which P follows ⟦M⟧.
Let us consider a simple example. The game N of natural numbers is a rooted tree, infinite in width, in which a play starts with O's question q ('What is your number?') and ends with P's answer n ∈ N ('My number is n!'), where N is the set of all natural numbers, and n points to q (though we omit this pointer). A strategy 10 on N, for instance, that corresponds to 10 ∈ N can be represented by the map q ↦ 10 equipped with a pointer from 10 to q (though it is the only choice). In the following, the pointers of most strategies are obvious, and thus, we often omit them.
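To make the informal description concrete, here is a minimal Python sketch of the game N and the strategy 10. The encoding (positions as tuples of moves, a strategy as a partial map on odd-length positions) is our own illustrative choice, not the paper's formal definition.

```python
# Naive illustration: the game N of natural numbers and the strategy 10.
# A position is a tuple of moves; a strategy is a partial map sending an
# odd-length position (it is P's turn) to P's next move.

def strategy_ten(position):
    """The strategy 10 : N, answering O's question q with 10.
    The pointer from the answer to q is omitted, being forced."""
    if position == ("q",):
        return 10
    return None  # undefined elsewhere: the strategy has nothing to say

# A play on N: O opens with the question q, then P follows the strategy.
play = ("q",)
play = play + (strategy_ten(play),)
print(play)  # ('q', 10)
```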
There is a construction ⊗ on games, called tensor (product). Conceptually, a position s of the tensor A ⊗ B of games A and B is an interleaving mixture of a position t of A and a position u of B developed 'in parallel without communication'; more specifically, t (resp. u) is the subsequence of s consisting of moves of A (resp. B) such that the change of AB-parity (i.e., the switch between t and u) in s must be made by O. The pointers in s are inherited from those in t and u in the obvious manner; this point holds also for other constructions on games and strategies in the rest of the introduction, and thus, we shall not mention it again. For instance, a maximal position of the tensor N[0] ⊗ N[1] is of either of the following forms 4 : q[0] n[0] q[1] m[1] or q[1] m[1] q[0] n[0], where n, m ∈ N, each answer points to its question (n.b., such pointers are distinct from edges of the game), and (_)[i] (i = 0, 1) are (arbitrary, unspecified) 'tags' to distinguish the two copies of N (but we often omit them if it does not bring confusion). Next, a fundamental construction ! on games, called exponential, is basically the countably infinite iteration of ⊗, i.e., !A =df A ⊗ A ⊗ … for each game A, where the 'tag' for each copy of A is typically given as (_, i), where i ∈ N. Another central construction ⊸, called linear implication, captures the notion of linear functions, i.e., functions that consume exactly one input to produce an output. A position of the linear implication A ⊸ B from A to B is almost like a position of the tensor A ⊗ B except the following three points: 1. The first occurrence of the position must be a move of B; 2. A change of AB-parity in the position must be made by P; 3. Each occurrence of an initial move (called an initial occurrence) of A points to an initial occurrence of B.
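The switching conditions that distinguish tensor from linear implication can be sketched concretely. The following Python fragment uses our own encoding (moves as (component, name) pairs, with O moving at even indices of a play, since O always starts) and checks which participant changes component in an interleaving.

```python
# Sketch: the switching conditions distinguishing tensor from linear
# implication. A move is a (component, name) pair; in a play, O moves at
# even indices and P at odd indices (O always starts).

def switches_ok(seq, switcher):
    """Every change of component must be made by `switcher`
    ('O' = mover at an even index, 'P' = mover at an odd index)."""
    for i in range(1, len(seq)):
        if seq[i][0] != seq[i - 1][0]:
            mover = "O" if i % 2 == 0 else "P"
            if mover != switcher:
                return False
    return True

def is_tensor_interleaving(seq):
    # In A (x) B, only O may switch between the two components.
    return switches_ok(seq, "O")

def is_linear_impl_interleaving(seq):
    # In A -o B: the play opens in B (component 1), and only P switches.
    return (not seq or seq[0][0] == 1) and switches_ok(seq, "P")

# q[0] n[0] q[1] m[1]: O switches at index 2 -- a tensor position.
tensor_play = [(0, "q"), (0, 7), (1, "q"), (1, 5)]
# q[1] q[0] n[0] m[1]: P switches at indices 1 and 3 -- linear implication.
impl_play = [(1, "q"), (0, "q"), (0, 7), (1, 8)]
print(is_tensor_interleaving(tensor_play))     # True
print(is_linear_impl_interleaving(impl_play))  # True
```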
Let us remark here that the plays ε, q[1], q[1] m[1], which correspond to a constant linear function that maps x ↦ m for all x ∈ N, are also possible: P may answer without consulting A at all. Thus, strictly speaking, A ⊸ B is the game of affine functions from A to B, but we follow the standard convention to call ⊸ linear implication. Another construction & on games, called product, is similar to yet simpler than tensor: A position s of the product A & B of A and B is either a position t[0] of A[0] or a position u[1] of B[1]. It is the product in the category G of games and strategies, e.g., there is the pairing ⟨σ, τ⟩ : !C ⊸ A & B of given strategies σ : !C ⊸ A and τ : !C ⊸ B that plays as σ (resp. τ) if O initiates a play by a move of A (resp. B). Clearly, we may generalize product and pairing to n-ary ones for any n ∈ N.
These four constructions ⊗, !, ⊸ and & come from the corresponding ones in linear logic [5,27]. Thus, in particular, the usual implication (or the function space) ⇒ is recovered by Girard translation A ⇒ B =df !A ⊸ B [27]: Girard translation makes explicit the point that some functions need to refer to an input more than once to produce an output, i.e., there are nonlinear functions. For instance, consider the game (N ⇒ N) ⇒ N of higher-order functions, in which a position of the following form is possible, where n, n′, m, m′, l, i, i′, j, j′ ∈ N and j ≠ j′; it can be read as follows: 1. O's question q for an output ('What is your output?'); 2. P's question (q, j) for an input function ('Wait, your first output please!'); 3. O's question ((q, i), j) for an input ('What is your first input then?'); 4. P's answer, say, ((n, i), j), to ((q, i), j) ('Here is my first input n.'); 5. O's answer, say, (m, j), to (q, j) ('OK, then here is my first output m.'); 6. P's question (q, j′) for an input function ('Your second output please!'); 7. O's question ((q, i′), j′) for an input ('What is your second input then?'); 8. P's answer, say, ((n′, i′), j′), to ((q, i′), j′) ('Here is my second input n′.'); 9. O's answer, say, (m′, j′), to (q, j′) ('OK, then here is my second output m′.'); 10. P's answer, say, l, to q ('Alright, my output is then l.').
In this play, P asks O twice about an input strategy on N ⇒ N. Clearly, such a play is not possible on the linear implication (N ⊸ N) ⊸ N or (N ⇒ N) ⊸ N. The strategy pazo : (N ⇒ N) ⇒ N that computes the sum f(0) + f(1) for a given function f : N ⇒ N, for instance, plays in exactly this manner, querying the input function twice, where j = 0 and j′ = 1 are arbitrarily chosen, i.e., any j, j′ ∈ N with j ≠ j′ work. Finally, let us point out that any strategy φ on the implication !A ⊸ B induces its promotion φ† : !A ⊸ !B, which plays as φ for each thread in a position of !A ⊸ !B that corresponds to a position of !A ⊸ B, where the 'tags' of the threads are handled via an arbitrarily fixed bijection ⟨_, _⟩ : N × N ≅ N.
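As an informal sketch, the dialogue performed by pazo can be mimicked in Python by treating O's input function as an ordinary function; the two calls correspond to the two exponential threads j and j′ in the play above. The names below are ours.

```python
# Sketch of the interaction behind pazo : (N => N) => N, which computes
# f(0) + f(1). O's behaviour (the input function) is modelled as a plain
# Python function; the two calls mirror the two threads j = 0 and j' = 1.

def pazo(oracle):
    """Query the input function at 0 and at 1, then sum the answers."""
    m = oracle(0)   # thread j = 0: P supplies input 0, O answers f(0)
    m2 = oracle(1)  # thread j' = 1: P supplies input 1, O answers f(1)
    return m + m2   # P's final answer l = f(0) + f(1)

print(pazo(lambda n: n * n))  # 0*0 + 1*1 = 1
```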

Toward a game-semantic model of computation
As seen in the examples given above, games and strategies capture higher-order computation in an abstract, conceptually natural fashion, where O plays the role of an oracle as part of the formalization. Note also that P computes on 'external behavior' of O, and thus O's computation does not have to be recursive at all. Thus, one may expect that games and strategies would be appropriate as mathematics of high-level computational processes, solving the research problem of Sect. 1.3. However, conventional games and strategies have never been formulated as a mathematical model of computation (in the sense of TMs); rather, the primary focus of the field has been full abstraction [8,15], i.e., to characterize observational equivalences in syntax. In other words, game semantics has not been concerned that much with step-by-step processes in computation or their 'effective computability,' and it has been identifying programs with the same value [32,70].
For instance, strategies on the game N ⇒ N typically play by q.(q, i).(n, i).m, where n, m, i ∈ N, as described above, and so they are essentially functions that map n → m; in particular, it is not formulated at all how they calculate the fourth move m from the third one (n, i). 5 As a consequence, 'effective computability' in game semantics has been extrinsic: A strategy has been defined to be 'effective' or recursive if it is representable by a partial recursive function [7,20,38].
This situation is in a sense frustrating since games and strategies seem to have a good potential to give a semantic, intrinsic (i.e., without recourse to an established model of computation), non-axiomatic, non-inductive formulation of higher-order computation, but they have not taken advantage of this potential.
To exploit this potential, we have decided to employ games and strategies as our basic mathematical framework and extend them to give mathematics of computational processes in the sense described in Sect. 1.3. To this end, we shall first refine the category G of games and strategies in such a way that it accommodates step-by-step processes in computation, and then define their 'effectivity' in terms of their atomic computational steps. Fortunately, there is already the bicategory DG of dynamic games and strategies [71], which addresses the first point.

Dynamic games and strategies
In the literature, there are several game models [13,18,31,56] that exhibit step-by-step processes in computation to serve as a tool for program verification and analysis (the work [17,23] may be called 'intensional game semantics,' but they rather keep track of costs in computation, not computational steps themselves). However, these variants of games and strategies are just conventional ones, and consequently, such step-by-step processes have no official status in their categories.
The problem lies in the point that in conventional game semantics composition of strategies is executed as parallel composition plus hiding [3], where hiding is the matter. Let us illustrate this point by a simple, informal example as follows. Consider again strategies succ and double, but this time they are adjusted to the game N ⇒ N. Their computations can be described by the plays q[1] (q, 0)[0] (m, 0)[0] (m + 1)[1] of succ : !N[0] ⊸ N[1] and q[3] (q, 0)[2] (n, 0)[2] (2·n)[3] of double : !N[2] ⊸ N[3], where the 'tag' (_, 0) on moves of the domains has been arbitrarily chosen (i.e., any i ∈ N instead of 0 works). The composition double • succ is computed in two steps. First, parallel composition lets the two strategies interact: succ's moves on N[1] are synchronized with double's moves on !N[2], and these internal moves (drawn with square boxes in the diagrams) record the intermediate communication.
Next, hiding means to hide or delete all moves with the square boxes from the play, resulting in the strategy for the function n ↦ 2·(n + 1) as expected, whose play is q[3] (q, ⟨0, 0⟩)[0] (n, ⟨0, 0⟩)[0] (2·(n + 1))[3]. By the hiding operation, the resulting play is a legal one of the game N ⇒ N, but let us point out that the intermediate occurrences of moves (with the square boxes), representing step-by-step processes in computation, are deleted by the operation. To overcome this problem, the present author and Samson Abramsky have introduced a novel, dynamic variant of games and strategies that systematically models dynamics and intensionality of computation, and also studied their algebraic structures [71]. In contrast to the previous work mentioned above, dynamic strategies themselves embody step-by-step processes in computation by retaining intermediate occurrences of moves, and composition of them is parallel composition without hiding. In addition, the categorical structure of existing game semantics is not lost but rather refined by the cartesian closed bicategory [57] DG of dynamic games and strategies, forming a categorical 'universe' of high-level computational processes.
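The two steps, parallel composition and (optional) hiding, can be sketched as follows in Python. The move list and component numbering mirror the informal description of succ and double above, but the encoding is our own and the interaction is hard-wired for this one example.

```python
# Sketch: composing succ and double as "parallel composition plus
# (optional) hiding". The full interaction records the internal moves on
# the shared components N[1]/!N[2]; hiding deletes them, leaving a play
# of !N[0] -o N[3].

def composed_play(n, hide):
    succ = lambda m: m + 1
    double = lambda m: 2 * m
    play = [
        (3, "q"),               # O asks double for an output
        (2, "q"),               # double queries its input      (internal)
        (1, "q"),               # ... which is succ's output    (internal)
        (0, "q"),               # succ asks O for its input
        (0, n),                 # O answers n
        (1, succ(n)),           # succ answers n + 1            (internal)
        (2, succ(n)),           # copied to double's input      (internal)
        (3, double(succ(n))),   # double answers 2 * (n + 1)
    ]
    if hide:  # delete the internal ("square-boxed") moves
        play = [(c, m) for (c, m) in play if c in (0, 3)]
    return play

print(composed_play(4, hide=True))
# [(3, 'q'), (0, 'q'), (0, 4), (3, 10)]
```

Without hiding, the full eight-move interaction is retained, which is the behaviour of composition in the dynamic setting described next.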

Viable strategies
Now, the remaining problem is to define 'effective' dynamic strategies in an intrinsic (i.e., solely in terms of games and strategies), non-inductive, non-axiomatic manner. Of course, we need to provide a convincing argument that justifies their 'effectivity' (though such an argument can never be mathematically precise) as in the case of TMs. Moreover, to obtain a powerful model of computation, they should be at least Turing complete, i.e., they ought to subsume all classically computable partial functions. This sets up, in addition to the conceptual quest so far, an intriguing mathematical question in its own right: Is there any intrinsic, non-inductive, non-axiomatic notion of 'effectivity' of dynamic strategies that is Turing complete?
Not surprisingly, perhaps, this problem has turned out to be challenging, and the main technical achievement of the present paper is to give a positive answer to it.
As already mentioned, our solution is to give low-level computational processes (which are clearly 'executable') in order to define 'effectivity' of dynamic strategies (or high-level computational processes). This is achieved roughly as follows.
Remark The concepts introduced below make sense for conventional (i.e., non-dynamic) games and strategies too, but they do not give rise to a Turing complete model of computation, because composition of conventional strategies does not preserve our notion of 'effectivity' or viability, as we shall see.
First, we give, by a fixed alphabet, a concrete formalization of 'tags' for disjoint union of sets of moves for constructions on games in order to rigorously formulate 'effectivity' of strategies. As we see in Sect. 1.4, a finite number of 'tags' suffices for most constructions on games, but it is not the case for exponential !. Then, we formalize 'tags' for exponential by a unary representation of natural numbers i ∈ N, i.e., i is represented by a sequence of i occurrences of a fixed symbol (extended by a symbolic implementation of a recursive bijection N* ≅ N [16], but here we omit the extension for simplicity), and employ, instead of the game N, the lazy variant N of the natural number game, whose maximal positions are of the form q̂ yes q yes … q no, where the number n of yes's in the position ranges over all natural numbers (n = 0 gives q̂ no) and represents the number intended by P. In this way, N gives a unary representation 7 of natural numbers. Note that the initial question q̂ must be distinguished from the non-initial one q for a technical reason, which will be clarified in Sect. 2. This sets up a finitary representation of game-semantic computation on natural numbers. Next, as we shall see, dynamic strategies modeling PCF only need to refer to at most three moves in the history of previous moves, which may be 'effectively' identified by pointers (specifically the last three moves in the P-view [6,38]; see Definition 16). Thus, it may seem at first glance that finitary dynamic strategies in the following sense suffice: A strategy is finitary if its representation by a partial function [38,51] that assigns the next P-move to previous moves, called its table, is finite. However, it is not the case: Finitary strategies cannot handle unboundedly many manipulations of 'tags' for exponential (more precisely, manipulations such that the length of input or output 'tags' is unbounded), but such manipulations seem to be necessary for Turing completeness, e.g., a strategy that models primitive recursion or minimization has to interact with input strategies unboundedly many times, and thus, it must handle unboundedly many 'tags.' Then, the main idea of our solution is to define a strategy to be viable if its table is 'describable' by a finitary strategy. To state it more precisely, let us note that there is the terminal game T, which has only the empty sequence as a position, and each game G is identical up to 'tags' to the implication T ⇒ G.
Hence, we may regard a strategy σ : G as one on the implication T ⇒ G up to 'tags,' and vice versa; we shall not take the trouble of distinguishing the two. Also, we define for each move m = [m′]e of a game G, where e = e₁.e₂ … e_k is a unary representation of the 'tag' for exponential on m, the strategy m on a suitable game G(M_G) that plays as q̂ . m′ . q . e₁ . q . e₂ … q . e_k, where each non-initial element points to the last element, and the font difference between the moves is just for clarity. In this manner, the strategy m : G(M_G) encodes the move m. Then, viability of strategies is given more precisely as follows: A strategy σ : G is defined to be viable if its partial function representation (m₃, m₂, m₁) ↦ m, where m₁, m₂ and m₃ are the last, the second last and the third last moves of the current P-view, respectively, is 'implementable' by a finitary strategy A(σ) on a suitable game, called an instruction strategy for σ. For instance, consider the successor and the doubling strategies modified for the lazy natural number game N, whose plays (on a nonzero input) are as in Fig. 1. Roughly, succ copies a given input on !N[0] and repeats it as an output on N[1], but it adds one more [yes[1]] to N[1] before [no[1]]; similarly, double copies an input and repeats it as an output, but it doubles the number of [yes[1]]'s in the output. It is easy to see that for computing the next P-move (with a pointer) at an odd-length position s = m₁m₂ … m_{2i+1}, the strategies only need to refer to at most the last O-move m_{2i+1}, the P-move m_{2j} pointed to by m_{2i+1} and the O-move m_{2j−1} (they are the last three moves in the P-view of s), e.g., succ : (∗, ∗, [q̂[1]]) ↦ [q̂[0]], ([q̂[1]], [q̂[0]], [yes[0]]) ↦ [yes[1]], and so on, where ∗ denotes 'no move.'
Note that these strategies do not need unboundedly many manipulations of 'tags'; they are in fact finitary (it is easy to construct finite tables for them, each of which consists of a finite number of quadruples of moves of the form (m 3 , m 2 , m 1 ) → m).
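To illustrate, here is a hypothetical Python rendering of succ on the lazy game N as a finite table of quadruples (m₃, m₂, m₁) ↦ m. The move names ("Q"/"q" for the initial/non-initial questions, suffixes 0/1 for the input/output components) and the driver loop playing O's part are our own simplifications of the description above.

```python
# Sketch: succ on the lazy game N as a *finite* table over the last
# three moves. The strategy copies the unary input digit by digit and
# inserts one extra "yes1" before "no1". Moves "x0" live in the input
# !N[0], moves "x1" in the output N[1].

SUCC_TABLE = {
    (None, None, "Q1"): "Q0",       # forward the opening question
    ("Q1", "Q0", "yes0"): "yes1",   # copy a digit of the input
    ("q1", "q0", "yes0"): "yes1",
    ("Q1", "Q0", "no0"): "yes1",    # input exhausted: emit one extra yes
    ("q1", "q0", "no0"): "yes1",
    ("yes0", "yes1", "q1"): "q0",   # keep copying: query the next digit
    ("no0", "yes1", "q1"): "no1",   # extra yes already emitted: stop
}

def play_succ(n):
    """Drive a play of succ against an O who holds the number n
    (n 'yes' digits then 'no') and keeps asking for output digits.
    For these plays the last three moves coincide with the P-view triple."""
    play, digits_left = ["Q1"], n
    while True:
        m3, m2, m1 = ([None, None] + play)[-3:]
        p_move = SUCC_TABLE[(m3, m2, m1)]
        play.append(p_move)
        if p_move == "no1":
            return sum(1 for m in play if m == "yes1")  # decode the output
        if p_move in ("Q0", "q0"):   # O answers the input query
            play.append("yes0" if digits_left > 0 else "no0")
            digits_left = max(digits_left - 1, 0)
        else:                        # O asks for the next output digit
            play.append("q1")

print(play_succ(2))  # 3
```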
On the other hand, consider the strategy min : N ⇒ N that implements the minimization, i.e., the map that sends a given partial function f : N ⇀ N to the least n ∈ N such that f(n) = 0 if it exists. For simplicity, the domain of min is !N, not N ⇒ N; min informs an input computation of an input number n ∈ N by the 'tag' [_]_n for exponential. As described in Fig. 2 (the minimization strategy min : N ⇒ N), min simply investigates whether the input computation gives back zero just by checking the first digit ([yes[0]] or [no[0]]) and adds [yes[1]] to the output if the input computation gives back nonzero (i.e., if the first digit is [yes[0]]). Note that min only needs to refer to at most the last three moves of each odd-length position (n.b., in this case, they are the last three moves of the P-view of the position as well). The point here is how the infinitary manipulation of 'tags' by min is reduced to a finitary computation by A(min). This example illustrates why we need viable (not only finitary) dynamic strategies for Turing completeness, where recall that minimization (in the general form) or an equivalent construction is vital to construct all partial recursive functions [16]. Also, it should be intuitively clear now why we have to employ composition of strategies without hiding: An instruction strategy for the composition of strategies φ : A ⇒ B and ψ : B ⇒ C without hiding can be obtained simply as the disjoint union of instruction strategies for φ† and ψ (see the proof of Theorem 76 for the details), but it is not possible for composition with hiding. (In fact, there is no obvious way to construct an instruction strategy for the composition of φ and ψ with hiding.) We advocate that viability of strategies gives a reasonable notion of 'effective computability' since finitary strategies are clearly 'effective,' and so their descriptions or instruction strategies can be 'effectively read off' by P. Note also that viability is defined solely in terms of games and strategies without any axiom or induction. Moreover, viability is at least as strong as Church–Turing computability: As the main results of the present work, we show that dynamic strategies definable by PCF are all viable (Theorem 81), and therefore, they are Turing complete in particular (Corollary 82). Also, viable dynamic strategies solve the problem defined in Sect. 1.3 in the following sense.
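The behaviour of min just described can be sketched as follows, with the oracle (the input partial function) modelled as a Python function and only the first 'digit' of each exponential thread inspected. This is an illustration of the idea, not the formal strategy; the names are ours.

```python
# Sketch of min : !N -o N computing minimization over an oracle
# f : N -> N given in lazy unary form: thread n of the exponential
# answers with f(n)'s digits, and min inspects only the first digit
# ("yes" iff f(n) > 0), emitting one output "yes" per nonzero answer.

def minimize(f):
    """Return the least n with f(n) == 0 (loops forever if none exists)."""
    n = 0
    while True:
        first_digit = "yes" if f(n) > 0 else "no"  # thread n's first digit
        if first_digit == "no":
            return n   # P closes the output: the answer is n (in unary)
        n += 1         # P emits one more output "yes" and opens thread n+1

print(minimize(lambda n: (n - 3) ** 2))  # 3
```

Note that the interaction visits unboundedly many exponential threads, which is exactly the unbounded 'tag' manipulation that a finitary table alone cannot express.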
First, as we have seen via examples, games and strategies give an abstract, syntax-independent formulation of high-level computational processes, e.g., the lazy natural number game N defines natural numbers (not their symbolic representations) as 'counting processes' in an abstract, syntax-independent fashion, beyond classical computation, e.g., higher-order computation. Moreover, an instruction strategy for a viable dynamic strategy describes a low-level computational process that implements the dynamic strategy. In this manner, we have obtained a single mathematical framework for both high-level and low-level computational processes as well as 'effective computability' of the former in terms of the latter.

Our contribution and related work
Our main technical achievement is to define an intrinsic, non-inductive, non-axiomatic notion of 'effectivity' of strategies in game semantics, namely viable dynamic strategies, and show that they are Turing complete (Corollary 82). We have also shown the converse (though it is not that surprising): The input/output behavior of each viable dynamic strategy computing on natural numbers coincides with a partial recursive function (Theorem 85). This result immediately implies a universality result [7,15,58] as well: Every viable dynamic strategy on a dynamic game interpreting a type of PCF is (up to intrinsic equivalence) the denotation of a term of PCF (Corollary 89). In addition, some of the well-known theorems in computability theory [16,60] such as the smn theorem and the first recursion theorem are generalized to non-classical computation (Corollaries 83 and 84). We hope that these technical results would convince the reader that viability of dynamic strategies is a natural, reasonable generalization of Church–Turing computability.
Another, more conceptual contribution of the present work is to establish a single mathematical framework for both high-level and low-level computational processes, where the former defines what computation does, while the latter describes how to execute the former. In comparison with existing mathematical models of computation, our game-semantic approach has some novel features. First, in comparison with computation by TMs or programming languages, plays of games are a more abstract concept; in particular, they are not necessarily symbol manipulations, which is why they are suitable for abstract, high-level computational processes. Next, computation in a game proceeds as an interaction between P and O, which may be seen as a generalization of computation by TMs in which just one interaction occurs (i.e., O gives an input on the infinite tape, and then P returns an output on the tape); this in particular means that O's computation does not have to be recursive, and it is part of the formalization, which is why game semantics in general captures higher-order computation in a natural, systematic manner. The present work inherits this interactive nature of game semantics. Last but not least, games are a semantic counterpart of types, where note that types do not a priori exist in TMs, and types in programming languages are syntactic entities. Hence, our approach provides a deeper clarification of types in the context of theory of computation.
Moreover, by exploiting the flexibility of game semantics, our approach would be applicable to a wide range of computation though it is left as future work. Also, game semantics has interpreted various logics as well [1,5,37,72], and so it would be possible to employ our framework for a realizability interpretation of constructive logic [65,68], for which viable dynamic strategies would be more suitable as realizers than existing strategies such as [12] since the former contains more 'computational contents' and makes more sense as a model of computation than the latter. Furthermore, the game models [1,72] interpret Martin-Löf type theory, one of the most prominent foundations of constructive mathematics, and thus our framework would provide a mathematical, syntax-independent formalization of constructive mathematics too. 8 Of course, we need to work out details for these developments, which is out of the scope of the present paper, but it is in principle clear how to apply our framework to existing game semantics. In this sense, the present work would serve as a stepping stone toward these extensions.
In the literature, there have been several attempts to provide a mathematical foundation of computation beyond classical or symbolic ones. We do not claim that our game-semantic approach is the best or canonical one in comparison with the previous work; however, it certainly has some advantages. For instance, Robin Gandy proposed in the famous paper [22] a notion of 'mechanical devices,' now known as Gandy machines (GMs), which appear more general than TMs, but showed that TMs are actually as powerful as GMs. However, since GMs are an axiomatic approach to defining a general class of 'mechanical devices' that are 'effectively executable,' they do not give a distinction between high-level and low-level computational processes, where GMs formulate the latter. The more recent abstract state machines (ASMs) [33] introduced by Yuri Gurevich employ a similar idea to that of GMs for 'effectivity,' namely to require an upper bound on the number of elements that may change in a single step of computation, utilizing structures in the sense of mathematical logic [63]. Notably, ASMs define a very general notion of computation, namely computation as structure transition. However, it seems that this framework is in some sense too general; for instance, it is possible for an ASM to compute a real number in a single step, but then its 'effectivity' is questionable. In general, an appropriate notion of 'effective computability' of ASMs has been missing. Also, the way an ASM computes a function is to update input/output pairs of the function in an element-wise fashion, but this does not seem to be a common or natural process in practice. In addition, Yiannis Moschovakis considered a mathematical foundation of algorithms [54] in which, similarly to us, he proposed that algorithms and their 'implementations' should be distinguished, where by algorithms he refers to what we call high-level computational processes.
However, his framework, called recursors, is also based on structures, and his notion of algorithms is relative to the atomic operations given in each structure; thus, it does not give a foundational analysis of the notion of 'effective computability.' Therefore, although the previous work captures broader notions of computation than the present work, our approach has the advantage of achieving both the distinction between high-level and low-level computational processes and a primitive, intrinsic notion of 'effective computability.' Also, the interactive, typed nature of game semantics stands in sharp contrast to the previous work.
At this point, we need to mention computability logic [39] developed by Giorgi Japaridze since his idea is similar to ours: he defines 'effective computability' via computing machines playing in games. Nevertheless, there are notable differences between computability logic and the present work. First, computing machines in computability logic are a variant of TMs, and thus they are less novel as a model of computation than our approach; in fact, the definition of 'effective computability' in computability logic can be seen more or less as a consequence of spelling out the standard notion of recursive strategies [7,20,38]. Next, our framework inherits the categorical structure of existing game semantics (see [71] for this point), providing a compositional formulation of logic and computation, i.e., a compound proof or program is constructed from its components, while no categorical structure of computability logic has been known. That said, it would be interesting to adopt his TMs-based approach in our framework and compare the resulting computational power with that of the present work.
Finally, let us mention some of the precursors of game semantics. To clarify the notion of higher-order computability, Stephen Cole Kleene considered a model of higher-order computation based on dialogues between computational oracles in a series of papers [42][43][44], which can be seen as the first attempt to define a mathematical notion of algorithms in a higher-order setting [50]. Moreover, Gandy and his student Giovanni Pani refined these works by Kleene to obtain a model of PCF that satisfies universality, though this work was not published. These previous papers are direct ancestors of game semantics (in particular the so-called HO-games [38] by Martin Hyland and Luke Ong). As another line of research (motivated by the full abstraction problem for PCF [58]), Pierre-Louis Curien and Gérard Berry conceived of sequential algorithms [10], which were the first attempt to go beyond (extensional) functions to capture sequentiality of PCF. Sequential algorithms preceded game semantics and were highly influential on its development; in fact, sequential algorithms are presented in the style of game semantics in [50], and it is shown in [14] that the oracle computation developed by Kleene can be represented by sequential algorithms (though the converse does not hold). Nevertheless, a point we would like to emphasize here is that neither of the previous attempts defines 'effective computability' in a manner similar to the present work; our approach has an advantage in its intrinsic, non-inductive, non-axiomatic nature.

Structure of the paper
The rest of the paper proceeds roughly as follows. This introduction ends with fixing some notation. Then, recalling dynamic games and strategies in Sect. 2, we define viability of strategies and establish, as the main theorem, the fact that viable dynamic strategies may interpret all terms of PCF in Sect. 3, proving their Turing completeness as a corollary. Finally, we draw a conclusion and propose future work in Sect. 4.
Notation We use the following notation throughout the paper:
• We use bold letters s, t, u, v, etc. for sequences, in particular ε for the empty sequence, and letters a, b, c, d, m, n, x, y, z, etc. for elements of sequences;
• We often abbreviate a finite sequence s = (x_1, x_2, . . . , x_|s|) as x_1 x_2 . . . x_|s|, where |s| denotes the length (i.e., the number of elements) of s, and write s(i), where i ∈ {1, 2, . . . , |s|}, as another notation for x_i;
• A concatenation of sequences is represented by their juxtaposition, but we often write as, tb, ucv for (a)s, t(b), u(c)v, etc., and also write s.t for st;
• We define s^n =df ss · · · s (n copies) for a sequence s and a natural number n ∈ N;
• We write Even(s) (resp. Odd(s)) iff s is of even length (resp. odd length);
• We define S^P =df {s ∈ S | P(s)} for a set S of sequences and P ∈ {Even, Odd};
• s ⪯ t means that s is a prefix of t, i.e., t = s.u for some sequence u, and given a set S of sequences, we define Pref(S) =df {s | ∃t ∈ S. s ⪯ t};
• For a poset P and a subset S ⊆ P, Sup(S) denotes the supremum of S;
• For a function f : A → B and a subset S ⊆ A, we define f ↾ S : S → B to be the restriction of f to S, and f* : A* → B* to be the element-wise extension of f, i.e., f*(a_1 a_2 . . . a_n) =df f(a_1)f(a_2) . . . f(a_n) for all a_1 a_2 . . . a_n ∈ A*;
• Given sets X_1, X_2, . . . , X_n and an index i ∈ {1, 2, . . . , n}, we write π_i for the i-th projection;
• We write x ↓ if an element x is defined, and x ↑ otherwise.
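For concreteness, the elementary sequence operations above can be rendered executably; the following Python sketch uses names of our own choosing (`is_prefix`, `pref`, `star`), with tuples standing for finite sequences:

```python
def is_prefix(s, t):
    """s is a prefix of t iff t = s.u for some sequence u."""
    return t[:len(s)] == s

def pref(S):
    """Pref(S): the set of all prefixes of sequences in S."""
    return {s[:i] for s in S for i in range(len(s) + 1)}

def even(s):
    """Even(s) holds iff s is of even length."""
    return len(s) % 2 == 0

def star(f):
    """f*: the element-wise extension of f : A -> B to sequences A* -> B*."""
    return lambda s: tuple(f(a) for a in s)
```

These definitions are only illustrative; the paper works with the abstract notation directly.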

Preliminary: games and strategies
Our games and strategies are essentially the 'dynamic refinement' of McCusker's variant [6,51], 9 which has been proposed under the name of dynamic games and strategies by the present author and Abramsky in [71] to capture dynamics (or rewriting) and intensionality (or algorithms) of computation by mathematical, particularly syntax-independent, concepts. As already explained, we have chosen this variant since, in contrast to conventional games and strategies, dynamic games and strategies capture step-by-step processes in computation, which is essential for a TMs-like model of computation. However, we need some modifications of dynamic games and strategies. First, although disjoint union of sets of moves (for constructions on games) is usually treated informally for brevity, we need to adopt a particular formalization of 'tags' for the disjoint union because we are concerned with 'effective computability' of strategies, and thus, we must show that manipulations of 'tags' are all 'effectively executable' by strategies. In particular, we have to employ exponential ! in which different 'rounds' or threads are distinguished by such 'effective tags.' In addition, we slightly refine the original definition of dynamic games by requiring that an intermediate occurrence of an O-move in a position of a dynamic game must be a mere copy of the last occurrence of a P-move, which reflects the example of composition without hiding in the introduction. This modification is due to our computability-theoretic motivation: Intermediate occurrences of moves are 'invisible' to O (as in the example of composition without hiding), and therefore, P has to 'effectively' compute intermediate occurrences of O-moves too (though this point does not matter in [71]); note that it is clearly 'effective' to just copy and repeat a move.
Also, it conceptually makes sense as well: Intermediate occurrences of O-moves are just copies or dummies of those of P-moves, and thus what happens in the intermediate part of each play is essentially P's calculation only. Technically, this is achieved by introducing dummy internal O-moves (Definition 8) and strengthening the axiom DP2 (Definition 18). Let us remark, however, that this refinement is technically trivial, and it is not our main contribution.
This section presents the resulting variant of games and strategies. Fixing an implementation of 'tags' in Sect. 2.1 as a preparation, we recall (the slightly modified) dynamic games and strategies in Sects. 2.2 and 2.3, respectively. To make this paper essentially self-contained, we shall explain motivations and intuitions behind the definitions.

On 'tags' for disjoint union of sets
Let us begin with fixing 'tags' for disjoint union of sets that can be 'effectively' manipulated. We first define outer tags (Definition 3) for exponential (Definition 33), and then inner tags (Definition 5) for other constructions on games.
Definition 1 (Effective tags) An effective tag is a finite sequence over a two-element set Σ = {ℓ, h}, where ℓ and h are arbitrarily fixed elements such that ℓ ≠ h.

Definition 2 (Decoding and encoding)
The decoding function de : Σ* → N* and the encoding function en : N* → Σ* are defined so as to be mutual inverses (n.b., they both map the empty sequence to itself). In fact, each effective tag γ ∈ Σ* is intended to be a binary representation of the finite sequence de(γ) ∈ N* of natural numbers. However, effective tags are not sufficient for our purpose: For nested exponentials occurring in promotion (Definition 57) and fixed-point strategies (Example 75), we need to 'effectively' associate a natural number to each pair of natural numbers in an 'effectively' invertible manner. Of course it is possible as there is a recursive bijection N × N ≅ N whose inverse is recursive too, which is an elementary fact in computability theory [16,60], but we cannot rely on it for we are aiming at developing an autonomous foundation of 'effective computability.' On the other hand, such a bijection is necessary only for manipulating effective tags, and so we would like to avoid an involved mechanism to achieve it. Our solution for this problem is to simply introduce elements to denote the bijection: Outer tags are generated by the grammar e ::= γ | ⟨e1 ĥ e2⟩ | ⟨e⟩, where γ ranges over effective tags.
Notation Let T denote the set of all outer tags.
Of course, we lose the bijectivity between Σ* and N* for outer tags (e.g., distinct outer tags may decode to the same sequence of natural numbers), but in return, we may 'effectively execute' the bijection ℘ : N* ≅ N by just inserting the elements ⟨ and ⟩. 10 We shall utilize outer tags for exponential !; see Definition 33.
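The precise encoding functions de and en are not reproduced here, so the following Python sketch substitutes a simple unary encoding that merely satisfies the stated specification (en injective with de as left inverse, both mapping the empty sequence to itself); it is not the paper's binary encoding, and the symbol names are ours:

```python
ELL, H = "l", "h"  # stand-ins for the two tag symbols of Definition 1

def en(ns):
    """en : N* -> Sigma*; here each number is encoded in unary, followed
    by a separator (an assumption; the paper's encoding differs)."""
    return "".join(ELL * n + H for n in ns)

def de(tag):
    """A left inverse of en: de(en(ns)) == ns for every ns in N*."""
    if tag == "":
        return ()
    return tuple(len(run) for run in tag[:-1].split(H))
```

Any invertible encoding with these properties would do for the purposes of the surrounding discussion.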
On the other hand, for 'tags' on moves for the other constructions on games, i.e., (_)[i] in the introduction, let us employ just four distinguished elements, called inner tags. We shall focus on games whose moves are all tagged elements.
Convention We often abbreviate an inner element m_{t1 t2 . . . tk} as m if the inner tag t1 t2 . . . tk is not very important.

Games
As already stated, our games are (slightly modified) dynamic games introduced in [71]. The main idea of dynamic games is to introduce, in McCusker's games [6,51], a distinction between internal and external moves, where internal moves constitute internal communication between strategies (i.e., moves with square boxes in the introduction), and they are to be a posteriori hidden by the hiding operation, in order to capture intensionality and dynamics of computation by internal moves and the hiding operation, respectively. Conceptually, internal moves are 'invisible' to O as they represent how P 'internally' calculates the next external P-move (i.e., step-by-step processes in computation). In addition, unlike [71], we restrict internal O-moves to dummies of internal P-moves (Definition 8) for the computability-theoretic motivation already mentioned at the beginning of Sect. 2.
We first review (the slightly modified) dynamic games in the present section; see [71] for the details, and [3,6,37] for a general introduction to game semantics.
Convention To distinguish our 'dynamic concepts' from conventional ones [6,51], we add the word static in front of the latter, e.g., static arenas, static games, etc.

Arenas and legal positions
Similarly to McCusker's games, dynamic games are based on two preliminary concepts: (dynamic) arenas and legal positions. An arena defines the basic components of a game, which in turn induces its legal positions that specify the basic rules of the game. Let us begin with recalling these two concepts.

Definition 8 (Dynamic arenas [71]) A dynamic arena is a quadruple G = (M_G, λ_G, ⊢_G, Δ_G) such that:
• M_G is a set of tagged elements, called moves, such that: (M) the set π_1(M_G) of all inner elements of G is finite;
• λ_G is a labelling function that assigns to each move an O/P-label, one of the labels Q (question) and A (answer), which are arbitrarily fixed, pairwise distinct symbols, and a priority order λ^N_G in N;
• ⊢_G ⊆ ({★} ∪ M_G) × M_G, where ★ is an arbitrarily fixed symbol such that ★ ∉ M_G, is called the enabling relation, satisfying the axioms E1-E4 discussed below;
• Δ_G is the dummy function, which assigns to each internal P-move its dummy internal O-move (axiom D below).
We write M^{Init}_G (resp. M^{Int}_G, M^{Ext}_G) for the set of all initial (resp. internal, external) moves of a dynamic arena G.
A dynamic arena is a static arena in the sense of [6], equipped with an additional labelling λ^N_G on moves and with dummies of internal P-moves, satisfying additional axioms about them. From the opposite angle, dynamic arenas are a generalization of static arenas: A static arena is equivalent to a dynamic arena whose moves are all external.
Recall that a static arena A determines possible moves of a game, each of which is O's/P's question/answer, and specifies which move n can be performed for each move m by the relation m A n (and A m means that m can initiate a play). Its axioms are E1, E2 and E3 (excluding the conditions on λ N A ): • E1 sets the convention that an initial move must be O's question, and an initial move cannot be performed for a previous move; • E2 states that an answer must be performed for a question; • E3 mentions that an O-move must be performed for a P-move, and vice versa.
Then, as an additional structure for dynamic arenas G, the work [71] employs all natural numbers for λ^N_G, not only the internal/external (I/E)-parity, to define a step-by-step execution of the hiding operation H: The operation H deletes all internal moves m such that λ^N_G(m), called the priority order of m (since it indicates the priority of m with respect to the execution of H), is 1, and decreases the priority orders of the remaining internal moves by 1. 11 In addition, unlike [71], we have introduced the additional structure of dummy functions for the computability-theoretic motivation mentioned at the beginning of Sect. 2. The idea is that each internal O-move m ∈ M^{OInt}_G is a mere dummy of an internal P-move. Note that the additional axioms for dynamic arenas are intuitively natural:
• M requires the set π_1(M_G) to be finite so that each move is distinguishable, which is not required in [71] yet necessary to define 'effectivity' in the present work;
• L requires the least upper bound μ(G) of the priority orders to be finite as it is conceptually natural and technically necessary for concatenation ‡ of games (Definition 36);
• E1 additionally states that O cannot 'see' internal moves, and thus, he cannot initiate a play with an internal move;
• E2 additionally requires the priority orders between a 'QA pair' to be the same since otherwise an output of the hiding operation may not be well defined;
• E4 states that only P can perform a move for a previous move if they have different priority orders because internal moves are 'invisible' to O (as we shall see, if λ^N_G(m_1) = k_1 < k_2 = λ^N_G(m_2), then after the k_1-many iterations of the hiding operation, m_1 and m_2 become external and internal, respectively, i.e., the I/E-parity of moves is relative, which is why E4 is concerned not only with the I/E-parity but with the more fine-grained priority orders);
• D requires that each internal P-move p ∈ M^{PInt}_G and its dummy Δ_G(p) ∈ M^{OInt}_G may differ only in their inner tags since the latter is the dummy of the former (n.b., it reflects the informal example in the introduction), and that Δ_G(p) is 'effectively' obtainable from p by a finitary calculation δ_G on inner tags.
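As a toy illustration of the priority-order mechanism, one step of H can be sketched in Python. The sketch ignores justification pointers, dummy functions and the rest of the arena structure, and encoding 'external' as priority order 0 is our convention, not the paper's:

```python
def hide_arena(priority):
    """One step of the hiding operation H on the priority-order labelling:
    internal moves of priority order 1 are deleted, and the orders of the
    remaining internal moves drop by 1 (order 0 marks external moves)."""
    return {m: (p - 1 if p > 0 else 0) for m, p in priority.items() if p != 1}

def hide_play(seq, priority):
    """H on a sequence of moves: erase the occurrences being hidden."""
    return [m for m in seq if priority[m] != 1]
```

Iterating `hide_arena` d times then corresponds to the d-fold hiding H^d discussed below.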
Convention From now on, arenas refer to dynamic arenas by default.
As explained previously, an interaction between P and O in a game is represented by a finite sequence of moves that satisfies certain axioms (under the name of (valid) positions; see Definition 18). Strictly speaking, however, we equip such sequences with an additional structure, called justifiers or pointers, to distinguish similar yet different computational processes (see, e.g., [6] for this point). 11 Although the main focus of the work [71] is to capture the small-step operational semantics of a programming language by such a step-by-step hiding operation H, such fine-grained steps do not play a main role in the present work. Nevertheless, we keep the structure λ^N_G as it makes sense for a model of computation to be equipped with the step-by-step hiding operation, and it would be interesting as future work to consider 'effective computability' of the hiding operation.
Definition 9 (Occurrences of moves) Given a finite sequence s ∈ M*_G of moves of an arena G, an occurrence (of a move) in s is a pair (s(i), i) such that i ∈ {1, 2, . . . , |s|}. More specifically, we call the pair (s(i), i) an initial occurrence (resp. a non-initial occurrence) in s if s(i) is an initial move of G (resp. otherwise).
Remark We have been so far casual about the distinction between moves and occurrences, but we shall be more precise from now on.
Definition 10 (J-sequences [6,38]) A justified (j-) sequence of an arena G is a finite sequence s of moves of G equipped with pointers such that each non-initial occurrence n in s has a (necessarily unique) pointer to an earlier occurrence m in s with m ⊢_G n, called the justifier of n; we also say that n is justified by m, or there is a (necessarily unique) pointer from the former to the latter.
Notation We write J_G for the set of all j-sequences of an arena G.
Convention By abuse of notation, we usually keep the pointer structure J_s of each j-sequence s = (s, J_s) implicit and often abbreviate occurrences (s(i), i) in s as s(i). Thus, s = t ∈ J_G means s = t and J_s = J_t. Moreover, we usually write J_s(s(i)) = s(j) for J_s(i) = j. This convention is mathematically imprecise, but it is very convenient in practice, and it does not bring any serious confusion (in fact, it has been standard in the literature of game semantics).
The idea is that each non-initial occurrence in a j-sequence must be performed for a specific previous occurrence, viz. its justifier. Since the present paper is not concerned with a faithful interpretation of programs, one may wonder if justifiers would play any important role in the rest of the paper; however, they do in a novel manner: They allow P to 'effectively' collect, from the history of previous occurrences, a bounded number of necessary ones, as we shall see in Sect. 3.1.
Note that the first element m of each non-empty j-sequence ms ∈ J G must be initial; we particularly call m the opening occurrence of ms. Clearly, an opening occurrence must be an initial occurrence, but not necessarily vice versa.
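A minimal executable rendering of the well-formedness of j-sequences may be helpful; the representation of pointers as indices into the sequence, and the function names, are ours:

```python
def is_j_sequence(s, is_initial, enables):
    """Check that s = [(move, ptr), ...] is a justified sequence: initial
    occurrences carry no pointer, and every non-initial occurrence points
    to a strictly earlier occurrence whose move enables it."""
    for i, (m, ptr) in enumerate(s):
        if is_initial(m):
            if ptr is not None:
                return False
        elif ptr is None or not 0 <= ptr < i or (s[ptr][0], m) not in enables:
            return False
    return True
```

Here the arena is given abstractly by its initiality predicate and its enabling relation, as in Definition 8.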
Let us now consider justifiers, j-sequences and arenas from the 'external viewpoint' (Definitions 12, 13 and 14):
Definition 12 (External justifiers [71]) Let G be an arena, s ∈ J_G and d ∈ N ∪ {ω}. Each non-initial occurrence n in s has a unique chain of justifiers m m_k . . . m_1 n (i.e., each occurrence in the chain justifies the next) such that m_1, . . . , m_k are occurrences of internal moves with priority orders at most d, while m is not. The occurrence m is called the d-external justifier of n in s, and written J^d_s(n).
Note that d-external justifiers are a simple generalization of justifiers: 0-external justifiers coincide with justifiers. d-external justifiers are intended to be justifiers after the d-times iteration of the hiding operation H, as we shall see shortly.
Definition 13 (External j-subsequences [71]) Given an arena G, s ∈ J_G and d ∈ N ∪ {ω}, the d-external justified (j-) subsequence H^d_G(s) of s is obtained from s by deleting occurrences of internal moves m such that 0 < λ^N_G(m) ≤ d and equipping the resulting sequence with the pointers given by d-external justifiers.
Remark It should be clear how to reformulate Definitions 11, 12 and 13 more formally, following Definitions 9 and 10.
Definition 14 (External arenas [71]) Let G be an arena and d ∈ N ∪ {ω}. The d-external arena H^d(G) of G is obtained from G by deleting all internal moves m with 0 < λ^N_G(m) ≤ d and decreasing the priority orders of the remaining internal moves by d.
Lemma 15 (Hiding lemma [71]) For each arena G and i ∈ N, the i-hiding operation H^i coincides with the i-times iteration of the 1-hiding operation H^1, and similarly for H^i_G on j-sequences.
Proof We only need to consider the additional structure of dummy functions; everything else has been proved in [71]. The axiom D on H^d(G) clearly follows from that on G, completing the proof.
Convention Thanks to Lemma 15, we henceforth regard the i-hiding operations H i and H i G as the i-times iteration of the 1-hiding operations H 1 and H 1 G , respectively, for all i ∈ N. For this reason, we write H and H G for H 1 and H 1 G , respectively, and call them the hiding operations (on arenas and j-sequences, respectively).
Next, let us recall the notion of the 'relevant part' of previous moves, called views:
Definition 16 (Views [6,38]) Given a j-sequence s of an arena G, the Player (P-) view ⌜s⌝_G and the Opponent (O-) view ⌞s⌟_G (we often omit the subscript G) are given by the following induction on |s|:
• ⌜ε⌝ =df ε;
• ⌜sm⌝ =df ⌜s⌝.m if m is a P-move;
• ⌜sm⌝ =df m if m is initial;
• ⌜smtn⌝ =df ⌜s⌝.mn if n is a non-initial O-move justified by m;
• ⌞ε⌟ =df ε;
• ⌞sm⌟ =df ⌞s⌟.m if m is an O-move;
• ⌞smtn⌟ =df ⌞s⌟.mn if n is a P-move justified by m;
where the justifiers of the remaining non-initial occurrences in ⌜s⌝ (resp. ⌞s⌟) are unchanged if their justifiers occur in ⌜s⌝ (resp. ⌞s⌟), and undefined otherwise. A view is a P- or O-view.
The idea behind Definition 16 is as follows. For a j-sequence tm of an arena G such that m is a P-move (resp. an O-move), the P-view t (resp. the O-view t ) is intended to be the currently 'relevant part' of t for P (resp. O). That is, P (resp. O) is concerned only with the last O-move (resp. P-move), its justifier and that justifier's P-view (resp. O-view), which then recursively proceeds.
As explained in [6], strategies (Definition 42) that model computation without state refer only to P-views, not entire histories of previous occurrences, as inputs; they are called innocent strategies (Definition 46). In this sense, innocence captures state-freeness of strategies. In this paper, however, P-views play a different yet fundamental role for our notion of 'effective computability' in Sect. 3.1.
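The inductive clauses of the P-view admit a direct implementation. The following Python sketch uses our own encoding of j-sequences, triples (move, polarity, pointer-index), and returns indices into the sequence so that the pointer structure remains meaningful:

```python
def p_view(s):
    """Indices (into s) of the occurrences forming the P-view of s, where
    s is a list of (move, polarity, ptr): polarity is 'O' or 'P', and ptr
    is the index of the justifier (None for initial occurrences)."""
    if not s:
        return []
    i = len(s) - 1
    _, pol, ptr = s[i]
    if pol == "P":                       # keep a P-move and recurse
        return p_view(s[:-1]) + [i]
    if ptr is None:                      # an initial O-move opens the view
        return [i]
    return p_view(s[:ptr]) + [ptr, i]    # jump back past the justifier
```

The O-view is computed dually, with the roles of 'O' and 'P' exchanged.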
We are now ready to define:
Definition 17 (Dynamic legal positions [71]) A dynamic legal position of an arena G is a j-sequence s ∈ J_G that satisfies alternation, generalized visibility and IE-switch, explained below.
Notation We write L_G for the set of all dynamic legal positions of an arena G.
Recall that a static legal position defined in [6] is a j-sequence that satisfies alternation and visibility, i.e., generalized visibility only for d = 0 [6,38,51], which is technically to guarantee that the P-and the O-views of a j-sequence are again j-sequences and conceptually to ensure that the justifier of each non-initial occurrence belongs to the 'relevant part' of the history of previous occurrences.
Static legal positions specify the basic rules of a static game: Every (valid) position of the game must be a static legal position of the underlying arena [6]: • In a position of the game, O always performs the first move by a question, and then P and O alternately play (by alternation), in which every non-initial occurrence is performed for a specific previous occurrence (by justification); • The justifier of each non-initial occurrence in a position belongs to the 'relevant part' of the previous occurrences in the position (by visibility).
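The first of these rules, alternation, lends itself to a direct mechanical check; the following Python sketch uses our encoding of a position as its list of O/P polarities:

```python
def alternation_ok(polarities):
    """A legal position opens with an O-move and then alternates
    O, P, O, P, ... (the alternation condition)."""
    return all(p == ("O" if i % 2 == 0 else "P")
               for i, p in enumerate(polarities))
```

Justification is already enforced by the j-sequence structure, and visibility additionally requires each justifier to occur in the relevant view.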
Similarly, dynamic legal positions are to specify the basic rules of a dynamic game (Definition 18). They are static legal positions that satisfy additional axioms: • Generalized visibility is a generalization of visibility; it requires that visibility holds after any iteration of the hiding operations on arenas and j-sequences; • IE-switch states that only P can change a priority order during a play because internal moves are 'invisible' to O, where the same remark as in E4 is applied for its finer distinction of priority orders than the I/E-parity.
Note that a dynamic legal position in which no internal move occurs is equivalent to a static legal position.
Convention Legal positions henceforth mean dynamic legal positions.

Games
We are now ready to recall dynamic games:
Definition 18 (Dynamic games [71]) A dynamic game is a quintuple G = (M_G, λ_G, ⊢_G, Δ_G, P_G) such that the quadruple (M_G, λ_G, ⊢_G, Δ_G) is an arena, and P_G is a subset of L_G, whose elements are called (valid) positions of G, that satisfies:
• (P1) P_G is non-empty and prefix-closed (i.e., sm ∈ P_G ⇒ s ∈ P_G);
• (DP2) in each position, every occurrence of an internal O-move is the dummy of the last occurrence of an internal P-move, justified by it.
Remark In [71], each dynamic game G is equipped with an equivalence relation ≃_G on its positions in order to ignore permutations of 'tags' for exponential ! as in [7] and Section 3.6 of [51]. Naturally, dynamic strategies σ : G are identified up to ≃_G, i.e., the equivalence class [σ] of σ with respect to ≃_G is a morphism in the bicategory of dynamic games and strategies [71], which matches the syntactic equality on terms. However, our notion of 'effective computability' or viability (Definition 70) is defined on dynamic strategies, not their equivalence classes, and our focus is not a fully complete interpretation of a programming language; thus, we do not have to take such equivalence classes at all. Hence, for simplicity, we exclude such equivalence relations on positions from the structure of dynamic games in the present paper. Of course, we may easily adopt the full definition of dynamic games G (i.e., with ≃_G) and equivalence classes [σ] of dynamic strategies σ : G as in [71]: We may simply define [σ] to be viable if there is some viable representative τ ∈ [σ].
Thus, dynamic games are static games as defined in [6,51] except that their arenas are dynamic ones and they additionally satisfy the axiom DP2. The axiom P1 captures the natural phenomenon that each non-empty position or 'moment' of a play must have a previous 'moment.' In addition, by the axiom DP2 for dynamic games, internal O-moves must be performed as dummies of the last internal P-moves, where the pointers specified by the axiom would make sense if one considers the example of composition of succ and double without hiding in the introduction. Conceptually, we impose the axiom because O cannot 'see' internal moves, and thus the internal part of each play must be essentially P's calculation only; technically, it is to ensure external consistency of dynamic strategies: Dynamic strategies always act in the same way from the viewpoint of O, i.e., the external part of each play by a dynamic strategy does not depend on the internal part (see [71] for the details).
Remark The axiom DP2 defined in [71] just requires determinacy of internal O-moves in each play (similarly to determinacy of P-moves for strategies), which suffices for the purpose of that work. However, in the present paper, we are concerned with 'effective computability' of strategies, and thus in particular computation of internal O-moves by P must be 'effective' (since O cannot compute them). For this reason, we have strengthened the axiom DP2 as above so that computation of internal O-moves becomes trivial.
Convention Henceforth, games refer to dynamic games by default.
Example 21 The boolean game 2 has the maximal positions [q][tt] and [q][ff], where each answer (i.e., [tt] or [ff]) is performed for the initial question [q]. The former (resp. the latter) represents the truth value true (resp. false).

Example 22
The natural number game N is defined to consist of an initial question [q] and the answers [n] for all n ∈ N: The position [q][n] represents the natural number n ∈ N, where the answer [n] is performed for the question [q]. This is the formal definition of N sketched in the introduction, though we have slightly changed the notation for the moves.
However, although the game N is standard in the literature, the 'content' of N is almost the same as that of the set N of all natural numbers except for the trivial one-round communication between the participants. This point is unsatisfactory because:
1. It is difficult to define an intrinsic, non-inductive, non-axiomatic notion of 'effective computability' of strategies on games generated from N via the construction ⇒ of function space (which will be given shortly) since there is no intensional or low-level structure in N (see, e.g., [4] for this point);
2. The game N contributes almost nothing new to foundations of mathematics.
Motivated by these points, we adopt the following 'lazy' variant:

Example 23
The lazy natural number game N is defined to represent each natural number n as a 'counting process': O repeatedly asks a question, and P continues the count n times before signalling its termination. The game N thus defines natural numbers in an intuitively natural manner, where our choice of notation for moves is inessential, i.e., N is syntax-independent. Moreover, we may actually define it intrinsically, i.e., without recourse to the set N, by specifying its positions inductively (in particular, [no] ∈ P_N). Thus, we may define (rather than represent) natural numbers to be positions of N, though we will not investigate foundational consequences of this definition in the present paper.
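Under this 'counting process' reading, the complete positions and the numbers they denote can be sketched as follows; the move names q/yes/no are placeholders of ours for the paper's notation:

```python
def lazy_nat_position(n):
    """The complete position of the lazy natural number game denoting n:
    O repeatedly asks q, and P answers 'yes' n times before answering 'no'."""
    return tuple(["q", "yes"] * n + ["q", "no"])

def denoted_number(pos):
    """Read a natural number off a complete position: its count of 'yes'."""
    return pos.count("yes")
```

The point of the lazy variant is precisely that such step-by-step positions, unlike the single answer [n] of N, carry low-level structure.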
As we shall see, such step-by-step processes underlying natural numbers allow us to define 'effective computability' of strategies on natural numbers in an intrinsic, non-inductive, non-axiomatic manner in Sect. 3.
Theorem 25 (Hiding theorem [71]) For each game G and i ∈ N, the i-hiding operation H^i on games coincides with the i-times iteration of the 1-hiding operation H^1.
Proof Based on Lemma 15; see [71].
Convention Thanks to Theorem 25, the i-hiding operation H i on games for each i ∈ N can be thought of as the i-times iteration of the 1-hiding operation H 1 , which we call the hiding operation (on games) and write H for it.

Constructions on games
Now, let us recall the constructions on games given in [71] with 'tags' formalized by outer and inner tags defined in Sect. 2.1. A tag refers to an outer or inner tag.
On the other hand, for readers who are not familiar with game semantics, we first give a rather standard presentation of each construction, which keeps 'tags' informal and unspecified, before its formal definition. For this aim, we employ:
Notation Let S and T be sets, and write S + T for their disjoint union. Then, we write x ∈ S + T if x ∈ S or x ∈ T, where we cannot have both x ∈ S and x ∈ T by the implicit 'tag' for the disjoint union S + T. Also, given functions f : S → U and g : T → U, we write [f, g] for the function S + T → U that maps x ∈ S + T to f(x) ∈ U if x ∈ S, and to g(x) ∈ U otherwise (n.b., it is generalized to more than two functions in the obvious manner). Moreover, given relations R_S ⊆ S × S and R_T ⊆ T × T, we write R_S + R_T for the evident relation on S + T.
Let us begin with tensor (product) ⊗. As mentioned in the introduction, a position of the tensor A ⊗ B of given games A and B consists of a position of A and a position of B played 'in parallel without communication.' More precisely, the positions of the tensor A ⊗ B are those j-sequences s such that s ↾ A ∈ P_A and s ↾ B ∈ P_B, where s ↾ A (resp. s ↾ B) denotes the j-subsequence of s that consists of moves of A (resp. B). As an illustration, recall the example N ⊗ N in the introduction, in which the 'tags' are informally written as (_)[i] (i = 0, 1).
As explained in [3], it is easy to see that during a play of the tensor A ⊗ B only O can switch between the component games A and B (by alternation).
Let us now give the formal definition of tensor, for which the 'tags' (_)^[0] and (_)^[1] are formalized by inner tags (_, W) and (_, E), respectively:

Definition 26 (Tensor of games [6]) The tensor (product) A ⊗ B of games A and B is defined by:

Example 27 Some typical plays of the tensor N ⊗ N are as follows:

Next, the linear implication A ⊸ B is the space of linear functions from A to B in the sense of linear logic [27], i.e., they consume exactly one input in A to produce an output in B (n.b., strictly speaking, it is an affine implication, as explained in the introduction). Usually, the linear implication A ⊸ B is given as follows for any game G. As an illustration, recall the example N ⊸ N in the introduction.

Note that the domain A must be normalized into H^ω(A), since otherwise the linear implication A ⊸ B may not satisfy the axiom DP2. This makes sense conceptually too, for the roles of P and O in A are exchanged, and thus P should not be able to 'see' internal moves of A. Note also that A ⊸ B is almost A ⊗ B if A is normalized, except for the switch of the roles in A; dually to A ⊗ B, only P can switch between A and B during a play of A ⊸ B (see [3] for the proof). Surprisingly, this simple point changes A ⊗ B into A ⊸ B.

Similarly to tensor, the formal definition of linear implication is as follows:

Definition 28 (Linear implication between games [6]) The linear implication A ⊸ B between games A and B is defined by:

where pointers in s from initial occurrences of A to those of B are deleted in s ↾ W and s ↾ E.

Example 29
Any game B and the linear implication T ⊸ B coincide up to tags. Also, some typical plays of the linear implication 2 ⊸ 2 are as follows: Note that the left diagram describes a strict linear function, i.e., one that asks for an input before producing an output, while the right diagram describes a non-strict one.
Next, let us recall product & of games. As stated in the introduction, a position of the product A&B is essentially a position of A or B. Similarly to the case of tensor, we formalize product as follows:

Definition 30 (Product of games [6]) The product A&B of games A and B is given by:

For the cartesian closed bicategory DG of dynamic games and strategies defined in [71], however, we have to generalize the construction C ⇒ A&B on normalized games A, B and C, where & precedes ⇒, because we need to pair strategies σ : L and τ : R such that H^ω(L) ⊴ C ⇒ A and H^ω(R) ⊴ C ⇒ B, and the ambient game of the pairing ⟨σ, τ⟩ would be such a generalization of C ⇒ A&B. For this point, [71] defines the pairing ⟨L, R⟩ of such games L and R by:

m ⊢⟨L,R⟩ n ⇔ m ⊢_L n ∨ m ⊢_R n;

where, given a function f : X → Y and a subset Z ⊆ X, we write f ↾ X\Z : X\Z → Y for the restriction of f to the subset X\Z ⊆ X.
Note that the pairing ⟨L, R⟩ does not depend on the choice of the normalized games A, B and C such that H^ω(L) ⊴ C ⇒ A and H^ω(R) ⊴ C ⇒ B. These tags are of course not canonical at all, but they certainly achieve the required subgame relation H^ω(⟨L, R⟩) ⊴ C ⇒ A&B. Then, we formalize the labeling function, the enabling relation and the dummy function of ⟨L, R⟩ by the obvious pattern matching on inner tags; positions of ⟨L, R⟩ are formalized in the obvious manner. However, the enabling relation is rather involved; thus, for convenience, we define the peeling peel⟨L,R⟩(m) ∈ M_L ∪ M_R of each move m ∈ M⟨L,R⟩, such that changing the inner tag of peel⟨L,R⟩(m) as defined above results in m, and also the attribute att⟨L,R⟩(m) ∈ {L, R, C} of m. The enabling relation m ⊢⟨L,R⟩ n is then easily defined as the conjunction of:
• att⟨L,R⟩(m) = att⟨L,R⟩(n) ∨ att⟨L,R⟩(m) = C ∨ att⟨L,R⟩(n) = C;
• peel⟨L,R⟩(m) ⊢_L peel⟨L,R⟩(n) ∨ peel⟨L,R⟩(m) ⊢_R peel⟨L,R⟩(n).
Formally, we define pairing of games as follows:

Definition 31 (Pairing of games [71]) The pairing ⟨L, R⟩ of games L and R such that H^ω(L) ⊴ C ⇒ A and H^ω(R) ⊴ C ⇒ B for some normalized games A, B and C is given by:
Example 32 Some typical plays of the pairing ⟨2 ⊸ 2, 2 ⊸ 2⟩ are as follows:

Next, let us recall exponential ! in the sense of linear logic, i.e., !A =df A ⊗ A ⊗ ⋯. The exponential !A is usually given by:

P_!A = {s ∈ L_!A | ∀i ∈ ℕ. s ↾ i ∈ P_A},

where s ↾ i is the j-subsequence of s that consists of moves (a, i) yet changed into a.
A naive idea is then to formalize each 'tag' (_, i) for exponential by an effective tag [_]_i (Definition 1), but as mentioned before, we need to generalize it to an extended effective tag [_]_f (Definition 3). Thus, we formalize exponential as follows:

Definition 33 (Exponential of games [6,51]) The exponential !A of a game A is defined by:

Similarly to the case of pairing, exponential is generalized in [71]: Given a game G such that H^ω(G) ⊴ !A ⊸ B for some normalized games A and B, there is the promotion G† of G such that H^ω(G†) ⊴ !A ⊸ !B. In fact, promotion is a generalization of exponential because (!T ⊸ B)† and !B coincide up to tags for any normalized game B; see [71] for the proof. Promotion of games is defined in [71] because a morphism A → B in the bicategory DG is a strategy φ : G such that H^ω(G) ⊴ !A ⊸ B, and therefore, it is necessary to take a generalized promotion φ† (Definition 57) for composition of strategies in DG, whose ambient game is G†.
The promotion G† is simply given as follows; note again that this way of formalizing 'tags' is far from canonical, but it certainly achieves the required subgame relation H^ω(G†) ⊴ !A ⊸ !B. Then, the labeling function, the enabling relation and the dummy function of G† are again defined by pattern matching on inner tags in the obvious manner, for which, as in the case of pairing, we use peeling and attributes just for convenience. Also, positions of G† are given by a straightforward generalization of those of exponential defined in Definition 33. Formally, we define promotion of games as follows:

Definition 35 (Promotion of games [71]) Given a game G with H^ω(G) ⊴ !A ⊸ B for some normalized games A and B, the promotion G† of G is given by:

P_G† = {s ∈ L_G† | ∀g ∈ T. s ↾ g ∈ P_G ∧ (s ↾ g ≠ ε ⇒ ∀h ∈ T. s ↾ h ≠ ε ⇒ ede(g) = ede(h))},

where s ↾ g is the j-subsequence of s that consists of moves x such that att_G†(x) = g yet changed into peel_G†(x).

Now, let us recall concatenation of games, which was first introduced in [71]: Given games J and K such that H^ω(J) ⊴ A ⊸ B and H^ω(K) ⊴ B ⊸ C for some normalized games A, B and C, the concatenation J‡K of J and K is given by letting (_)^[0] (resp. (_)^[1]) be the 'tag' on B in J (resp. K), and μ(J‡K) =df Max(μ(J), μ(K)) + 1.

Note that moves of B (in J or K) become internal in J‡K, and therefore, they would be deleted by the hiding operation H on games. Note also that the concatenation J‡K does not depend on the choice of normalized games A, B and C such that H^ω(J) ⊴ A ⊸ B and H^ω(K) ⊴ B ⊸ C. Concatenation corresponds to composition without hiding in the introduction, and it plays a central role in [71]. We shall see later that the concatenation σ‡τ of strategies σ : J and τ : K is played on the concatenation of their ambient games.

Again, this implementation of 'tags' is not canonical at all, but the point is that it achieves the required subgame relation H^ω(J‡K) ⊴ A ⊸ C. Then, the labeling function, the enabling relation and the dummy function of J‡K are defined by the obvious pattern matching on inner tags, and positions of J‡K are defined as usual. Formally, concatenation of games is defined as follows, where it should be clear how the peeling peel_J‡K and the attributes att_J‡K work:

where s ↾ J (resp. s ↾ K) is the j-subsequence of s that consists of moves m such that att_J‡K(m) = J (resp. att_J‡K(m) = K) yet changed into peel_J‡K(m), and s ↾ B^[0], B^[1] is the j-subsequence of s that consists of moves of B^[0] or B^[1].

Example 37 A typical play of the concatenation (N ⊸ N)‡(N ⊸ N) is:

Finally, let us recall the rather trivial currying and uncurrying [6] of games. Roughly, currying generalizes the map A ⊗ B ⊸ C ↦ A ⊸ (B ⊸ C), where A, B and C are arbitrary normalized games. Note that the games A ⊗ B ⊸ C and A ⊸ (B ⊸ C) coincide up to 'tags,' and therefore, the currying and the uncurrying operations boil down to trivial manipulations of 'tags.' Nevertheless, we formalize such manipulations of 'tags' here.
For their simplicity, let us skip their informal definitions and just present the formal ones:

Definition 38 (Currying of games [71]) If a game G satisfies H^ω(G) ⊴ A ⊗ B ⊸ C for some normalized games A, B and C, then the currying Λ(G) of G is given by λ_Λ(G)(x) = λ_G(peel_Λ(G)(x)) for all x ∈ M_Λ(G). The uncurrying of games is defined in the dual manner.

Strategies
Next, let us recall another central notion of strategies.

Strategies
Our strategies are the dynamic variant introduced in [71]. However, there is nothing special in the definition: A dynamic strategy on a (dynamic) game is a strategy on the game in the conventional sense [6,51], i.e.,

Definition 42 (Dynamic strategies [6,51,71]) A dynamic strategy σ on a (dynamic) game G, written σ : G, is a subset σ ⊆ P^Even_G that satisfies:
• (S1) It is non-empty and even-prefix-closed (i.e., smn ∈ σ ⇒ s ∈ σ);
• (S2) It is deterministic (i.e., smn, s′m′n′ ∈ σ ∧ sm = s′m′ ⇒ smn = s′m′n′).
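The two axioms can be checked mechanically on a finite strategy. A minimal sketch, with the illustrative assumption that positions are tuples of moves and justification pointers are omitted (so this is not the paper's full formalism):

```python
# A strategy is modeled as a set of even-length move sequences (tuples).

def even_prefix_closed(sigma):
    # (S1) non-empty and even-prefix-closed: smn in sigma implies s in sigma
    return bool(sigma) and all(s[:-2] in sigma for s in sigma if len(s) >= 2)

def deterministic(sigma):
    # (S2) at most one P-response per odd-length position sm
    responses = {}
    for s in sigma:
        if len(s) >= 2:
            key, resp = s[:-1], s[-1]
            if responses.setdefault(key, resp) != resp:
                return False
    return True

# The toy strategy {ε, q·a} satisfies both axioms:
sigma = {(), ('q', 'a')}
print(even_prefix_closed(sigma) and deterministic(sigma))  # True
```

For instance, {(), ('q', 'a'), ('q', 'b')} fails (S2), since it offers two responses to the same odd-length position.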
Convention Henceforth, strategies refer to dynamic strategies by default.
Notation Henceforth, we often indicate the form of tags of moves [m X 1 X 2 ...
Example 43 Given a natural number n ∈ N, the nth numeral strategy is the strategy n :

Example 44
The successor strategy succ : where y and n abbreviate yes and no, respectively. Note that it is a formalization of the successor strategy in the introduction.

Example 45
The predecessor strategy pred is defined similarly. It is easy to see that pred implements the predecessor function 0 ↦ 0, n + 1 ↦ n.
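To convey the 'counting process' reading of the lazy natural number game, here is a toy rendering in which a numeral is the finite answer sequence yes, …, yes, no, and succ and pred act on such processes. This is an informal illustration of the intended input/output behavior only, not the game-theoretic definitions above.

```python
# Assumption: a 'counting process' is rendered as a list of answers.

def numeral(n):
    """The nth numeral as a counting process: n times 'yes', then 'no'."""
    return ['yes'] * n + ['no']

def succ(process):
    # one extra count, then copy-cat the input process
    return ['yes'] + list(process)

def pred(process):
    # drop one count if there is any (so 0 is mapped to 0)
    return process[1:] if process[0] == 'yes' else list(process)

def read_back(process):
    """Recover the natural number: count the 'yes' answers before 'no'."""
    return process.index('no')

print(read_back(succ(numeral(3))))  # 4
print(read_back(pred(numeral(0))))  # 0
```

Note that succ does not need to know its whole input in advance: it emits one extra 'yes' and then copies the input answer by answer, mirroring the step-by-step character of plays in N.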
Next, let us recall two constraints on strategies: innocence and well bracketing. One of the highlights of HO-games [38] is to establish a one-to-one correspondence between terms of PCF in a certain η-long normal form, known as PCF Böhm trees [8], and innocent, well-bracketed strategies (on games modeling types of PCF). That is, the two conditions limit the codomain of the interpretation of PCF, i.e., the category of HO-games, in such a way that the interpretation becomes full.
Roughly, a strategy is innocent if its computation depends only on the P-view of each odd-length position (rather than the entire position), and well bracketed if every 'question-answering' by the strategy is done in the 'last-question-first-answered' fashion. Formally:

Definition 46 (Innocence of strategies [6,38,51]) A strategy σ : G is innocent if its computation depends only on P-views, i.e., if smn ∈ σ, t ∈ σ and tm ∈ P^Odd_G with ⌈tm⌉ = ⌈sm⌉, then tmn ∈ σ.

Definition 47 (Well bracketing of strategies [6,38,51]) A strategy σ : G is well bracketed if, whenever sqta ∈ σ, where q is a question that justifies an answer a, every question in t′ defined by ⌈sqt⌉_G = ⌈sq⌉_G · t′ justifies an answer in t′.
The bijective correspondence holds also for the game model [6], on which our games and strategies are based. Moreover, modeling states and control operators corresponds to relaxing innocence and well bracketing, respectively, in the model; in this sense, the two conditions characterize 'purely functional' languages [6].
Note that innocence and well bracketing have been imposed on strategies in order to establish full abstraction and/or definability [15], but neither is our main concern in the present paper. However, we would like P to be able to collect a bounded number of 'relevant' moves from each odd-length position in an 'effective' fashion; for this point, it is convenient to focus on innocent strategies since it then suffices for P to trace back the chain of justifiers. In fact, we shall define our notion of 'effective computability' only on innocent strategies in Sect. 3.
On the other hand, we do not impose well bracketing on strategies (thus, control operators are 'effective' in our sense); nevertheless, we shall consider only strategies modeling terms of PCF in the present work, which are all well bracketed.
Remark We conjecture that it is possible to define 'effectivity' of non-innocent strategies in a fashion similar to the case of innocent ones defined in Sect. 3.1. For this, however, we need to modify the procedure for P to collect a bounded number of moves from each oddlength position (defined in Sect. 3.1) so that she may refer to moves outside of P-views, which is left as future work.
Convention From now on, a strategy refers to an innocent strategy by default. We may clearly regard innocent strategies σ : G as (partial) view functions f_σ : ⌈P^Odd_G⌉ ⇀ M_G with the pointer structure implicit (see [51] for the details); we shall freely exchange the tree representation σ and the function representation f_σ, and often write σ for f_σ.
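Since view functions consume P-views, it may help to see how a P-view is computed from a position with justification pointers. The following is a sketch of the standard recursive clauses from HO-games, under the illustrative assumption that a move is represented as a triple (label, polarity, justifier index).

```python
def p_view(s):
    """Return the indices of s that form the P-view of s.

    Clauses: a P-move keeps the move before it; an initial O-move starts
    the view; a non-initial O-move jumps back to its justifier.
    """
    view = []
    i = len(s) - 1
    while i >= 0:
        _, polarity, justifier = s[i]
        view.append(i)
        if polarity == 'P':
            i -= 1              # P-move: continue with the preceding move
        elif justifier is None:
            break               # initial O-move: the view starts here
        else:
            i = justifier       # non-initial O-move: jump to its justifier
    return list(reversed(view))

# The O-move at index 4 points back to the P-move at index 1,
# so indices 2 and 3 are dropped from the P-view:
s = [('q', 'O', None), ('q', 'P', 0), ('q', 'O', 1), ('a', 'P', 2), ('a', 'O', 1)]
print(p_view(s))  # [0, 1, 4]
```

The 'jump to the justifier' clause is exactly what lets an innocent strategy ignore intervening moves, and it is also why tracing back the chain of justifiers suffices for P to collect the relevant occurrences 'effectively,' as discussed above.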
As in the case of games, we now define the hiding operation on strategies. Note that an even-length position is not necessarily preserved under the hiding operation on j-sequences. For instance, let smnt be an even-length position of a game G such that sm (resp. tn) consists of external (resp. internal) moves only. By IE-switch on G, m is an O-move, and so H^ω(smnt) = sm is of odd length. Thus, we define:

Definition 48 (Hiding on strategies [71]) Given a game G, a position s ∈ P_G and a number d ∈ ℕ ∪ {ω}, let

The d-hiding operation H d on strategies is given by H
The following beautiful theorem in a sense implies that the above definition is a reasonable one. Also, it induces the hiding functor H^ω from the bicategory DG of dynamic games and strategies to the category G of static games and strategies [71].

Convention Thanks to Theorem 49, the i-hiding operation H^i on strategies for each i ∈ ℕ can be thought of as the i-fold iteration of the 1-hiding operation H^1, which we call the hiding operation (on strategies) and write H for it.
It is straightforward to see that normalized games (resp. strategies) are equivalent to static games (resp. strategies) given in [6]; see [71] for the details.

Constructions on strategies
Next, let us review the standard constructions on strategies [6,51], for which we need to adopt our tags. Having introduced our formalization of 'tags' for constructions on games in Sect. 2.2.3, let us just present the formalized constructions on strategies (without their standard, informal versions), as they should be clear enough.
First, the following derelictions just 'copy-cat' the last occurrence of an O-move:

Definition 50 (Derelictions [6,7,51]) The dereliction der_A : A ⇒ A on a normalized game A is defined by:

Example 51
The computation of the dereliction der_A may be depicted as follows:
where [a^(1)]_{e^(1)} [a^(2)]_{e^(2)} [a^(3)]_{e^(3)} [a^(4)]_{e^(4)} ⋯ ∈ P_A.

Next, as in the case of tensor of games, we have tensor of strategies. The next one is the pairing in the category G of static games and strategies [6]:

Definition 54 (Pairing of strategies [6,7,38]) Given normalized games A, B and C, and normalized strategies σ : C ⇒ A and τ : C ⇒ B, the pairing ⟨σ, τ⟩ : C ⇒ A&B of σ and τ is defined by:

It is clearly a generalization of static pairing; consider the case where L = C ⇒ A and R = C ⇒ B.
Convention Henceforth, pairing of strategies refers to the generalized one.
Next, let us recall promotion of strategies:

Definition 57 (Promotion of strategies [6,7,51]) Given normalized games A and B, and a normalized strategy φ : !A ⊸ B, the promotion φ† : !A ⊸ !B of φ is given by:

As stated before, [71] generalizes promotion of strategies (for the reason explained before Definition 35) as follows:

Definition 58 (Generalized promotion of strategies [71]) Given a strategy φ : G such that H^ω(G) ⊴ !A ⊸ B for some normalized games A and B, the generalized promotion φ† : G† of φ is defined by:

Convention Henceforth, promotion of strategies refers to the generalized one. Note that there are two threads in the above play, and the strategy succ† behaves as succ in both of the threads.

Now, let us recall a central construction of strategies in [71], which reformulates composition (with hiding) of strategies as follows:

Definition 60 (Concatenation and composition of strategies [71]) Let σ : J and τ : K be strategies such that H^ω(J) ⊴ A ⊸ B and H^ω(K) ⊴ B ⊸ C for some normalized games A, B and C. The concatenation σ‡τ : J‡K of σ and τ is defined by:

and their composition σ ; τ : H^ω(J‡K) by σ ; τ =df H^ω(σ‡τ).
We also write τ • σ for σ ; τ. If J = A ⊸ B and K = B ⊸ C, then our composition σ ; τ : H^ω((A ⊸ B)‡(B ⊸ C)) ⊴ A ⊸ C coincides with the standard one [6,38,51]; see [71] for the details. In this sense, our composition generalizes the standard one, and moreover it is decomposed into concatenation plus hiding.
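The slogan 'composition = concatenation plus hiding' can be illustrated on toy counting processes: concatenation keeps a transcript that includes the moves of the middle game B, and hiding deletes them. All names below are illustrative assumptions, not the paper's constructions.

```python
# Toy model: strategies as functions on counting processes; 'B' tags moves
# of the middle game, 'C' tags moves of the output game.

def numeral(n):
    return ['yes'] * n + ['no']

def succ(process):
    return ['yes'] + list(process)

def concatenate(f, g, inp):
    # the middle play is recorded in the transcript (composition WITHOUT hiding)
    mid = f(inp)
    out = g(mid)
    return [('B', m) for m in mid] + [('C', m) for m in out]

def hide(transcript):
    # the hiding operation deletes the (now internal) B-moves
    return [m for tag, m in transcript if tag == 'C']

def read_back(process):
    return process.index('no')

# succ composed with succ computes +2: hiding the middle game of the
# concatenation leaves only the output process for 1 + 2 = 3.
print(read_back(hide(concatenate(succ, succ, numeral(1)))))  # 3
```

The transcript before hiding plays the role of a position of J‡K, where the B-moves are internal; applying hide corresponds to passing from σ‡τ to σ ; τ = H^ω(σ‡τ).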

Viable strategies
We have defined our games and strategies in the previous section. In this main section of the present paper, we introduce a novel notion of 'effective' or viable strategies, and show that viable (dynamic) strategies subsume all computations of the programming language PCF [58,61], and thus in particular are Turing complete. In Sect. 3.1, we define viability of strategies and show that it is preserved under all the constructions on strategies defined in Sect. 2.3.2. We then describe various examples of viable strategies in Sect. 3.2, based on which we finally prove in Sect. 3.3 that viable (dynamic) strategies may interpret all terms of PCF.

Viable strategies
The idea of viable strategies is as follows. First, it seems necessary to restrict the number of previous occurrences that P is allowed to look at (to calculate the next P-move) to a bounded one, since the number of odd-length positions of a game can be infinite, e.g., consider the game N (Example 23).15 Fortunately, to model the language PCF, it turns out that strategies only need to read off at most the last three occurrences of each P-view (and possibly a few initial or internal moves, which are easily identified as well), as we shall see, which is clearly 'effective' in an informal sense. Thus, it remains to formulate how strategies 'effectively' compute the next P-move from such a bounded number of previous occurrences. Note that (as already mentioned) computation of internal O-moves should be done by P, but it is rather trivial by the axiom DP2 (Definition 18), and therefore, we shall omit it for brevity. Note also that we shall focus on innocent strategies as a means to narrow down the previous occurrences to be concerned with.16 As the set π1(M_G) is finite for any game G (Definition 8), innocent strategies that are finitary in the sense that their view functions are finite seem sufficient at first glance. However, to model fixed-point combinators in PCF, strategies need to initiate new threads unboundedly many times [7,38] (Example 75); also, 'effective' strategies have to be closed under promotion (Definition 57) for modeling PCF, in which possible outer tags are infinitely many. Thus, finitary strategies are not strong enough.
Then, how can we give a stronger notion of 'effectivity' of the next P-move from (a bounded number of) previous occurrences solely in terms of games and strategies? Our solution, which is the main achievement of the present work, is to define a strategy σ : G to be 'effective' or viable (Definition 70) if it is 'describable' by a finitary strategy, called an instruction strategy for σ (Definition 68), on the instruction game for G (Definition 65); see Sect. 1.7 for an illustration of the idea.
Having explained the idea of viable strategies, let us proceed in the present section to make it mathematically precise.
Notation Given a game G, we assign a symbol m̲ to each m ∈ π1(M_G), for which we may assume that these symbols are pairwise distinct since the set π1(M_G) is finite, and define Sym(π1(M_G)) to be the set of these symbols.
15 Note that this is analogous to the computation of a TM which looks at only one cell of an infinite tape at a time. 16 Of course, there might be another way to 'effectively' eliminate irrelevant occurrences from the history of previous occurrences; in fact, we need more than P-views to model languages with states [6], which is left as future work.
However, there remain two problems. The first one is the pairing ⟨σ, τ⟩ : ⟨L, R⟩ of strategies σ : L and τ : R such that H^ω(L) ⊴ C ⇒ A and H^ω(R) ⊴ C ⇒ B for some normalized games A, B and C: Because moves of C are common to σ and τ, the last three occurrences of each P-view may not suffice; the pairing ⟨σ, τ⟩ needs to know which of A or B the first occurrence of each position of ⟨L, R⟩ belongs to. Also, that occurrence is no longer initial as soon as the pairing is post-concatenated; thus, it does not suffice to trace the first occurrence of each position. We shall overcome this point by collecting the necessary information as states (Definition 67).
The second one is how to 'effectively' calculate the 'relevant' (and finite) part of outer tags represented by moves occurring in an instruction game (for the number of all outer tags is infinite). For this point, we introduce the notion of m-views. It is clearly 'effective' to calculate the m-view of a given position of an instruction game in an informal sense. For instance, deterministic pushdown automata [35,48,64] may compute m-views on the stack, where we assume that positions of games are written on the input tape, in the obvious manner. We may even dispense with a stack by embedding the depth d of each occurrence, e.g., by the d-fold iteration of q, right after the occurrence in positions of instruction games (for which we need to slightly modify the notion of instruction games accordingly). Nevertheless, for simplicity, we shall not specify a method for the calculation of m-views.

We are now ready to make the notion of 'describable by a finitary strategy' precise:
• S_A ⊆ π1(M_G)* is a finite set, whose elements are called states, equipped with the query (function) Q_A that satisfies:

Remark Note that it does not make a difference if each st-algorithm A::G focuses on the P-views of each tx ∈ P^Odd_{G(M_G)³ ⇒ G(M_G)&2}, for ⌈tx⌉ = tx by the trivial pointers.
Definition 68 (Instruction strategies) Given a game G, an st-algorithm A::G and a state m ∈ S A , the instruction strategy A m of A at m is the strategy on the game G(M G ) 3 ⇒ G(M G )&2 defined by: where the justifier of y in txy is the obvious, canonical one.
Convention Given an st-algorithm A::G, each instruction strategy A_m has to specify pointers in P_G, which in the present work are always either the last or the third last occurrence, or the justifier of the second occurrence, of the P-view of each odd-length position of G (as we shall see); this is why A_m is on the game G(M_G)³ ⇒ G(M_G)&2, so that it may specify the ternary choice of justifiers in the component game 2 (by tt, ff or 'no answer'). However, since the justifiers in P_G occurring in this paper are all obvious ones, we henceforth keep them implicit.
Remark Since an st-algorithm A::G refers to m-views only occasionally, we regard each A_m as a partial function in most cases. Accordingly, A_m is mostly a strategy on the game G(M_G)³ ⇒ G(M_G) whose partial function representation is finite.
Thus, an instruction strategy is a strategy on the game G(M_G)³ ⇒ G(M_G), where G is a game, that is finitary in the sense that it is representable by a finite partial function, and so it is clearly 'effective' in an informal sense. We shall see in Sect. 3.3 that the number 3 in G(M_G)³ is the least number needed to achieve Turing completeness. As already mentioned, our idea is to utilize such an instruction strategy as a 'description' of a strategy on G, which may be 'effectively' read off:

Definition 69 (Realizability) The strategy st(A) : G realized by an st-algorithm A::G is defined by:

That is, a strategy σ : G is viable (Definition 70) if there is a finitary strategy on G(M_G)³ ⇒ G(M_G) that 'describes' the computation of σ. The terms realize and realizability come from mathematical logic, in which a realizer refers to some computational information that 'realizes' the constructive truth of a mathematical statement [65].
Given an st-algorithm A::G that realizes a strategy σ : G, P may 'effectively' execute A to compute σ roughly as follows:
1. Given sa ∈ P^Odd_G, P calculates the current state m =df Q_A(⌈sa⌉) and the last (up to) three moves ⌈sa⌉↾3 of the P-view; if m ∉ S_A, then she stops, i.e., the next move is undefined;
2. Otherwise, she composes (⌈sa⌉↾3)† with A_m, calculating A_m • (⌈sa⌉↾3)†;
3. Finally, she reads off the next move M(A_m • (⌈sa⌉↾3)†) (and its justifier) and performs that move (with the pointer).
For conceptual clarity, here we assume that P may write down moves [m]_e in P-views as [m]_{C*(e)} and execute strategies on instruction games symbolically on her 'scratch pad,' and also that she may read off strategies σ′ : G(M_G) on the 'scratch pad' and reproduce them as moves M(σ′) ∈ M_G. This procedure is clearly 'effective' in an informal sense, which is our justification of the notion of viable strategies.

Note that there are two kinds of processes in a viable strategy σ : G. The first one is the process of σ per se, whose atomic steps are sa ∈ P^Odd_G ↦ sa · σ(⌈sa⌉), and the second one is the process of an st-algorithm A realizing σ, whose atomic steps are those of the instruction strategies A_m, where m is the current state. The former is abstract and high-level, while the latter is symbolic and low-level. In this manner, we have achieved a mathematical formulation of high-level and low-level computational processes and 'effective computability' of the former in terms of the latter (as promised in the introduction).
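The three-step execution procedure above can be sketched schematically. The table-based representation of instruction strategies below is a hypothetical simplification: it replaces the game-semantic composition with A_m by a finite-table lookup on the state and the last (up to) three moves of the P-view.

```python
def execute_step(table, query, p_view):
    """One execution step of an st-algorithm, schematically."""
    state = query(p_view)              # step 1: current state from the P-view
    if state is None:
        return None                    # state undefined: the strategy stops
    last3 = tuple(p_view[-3:])         # the last (up to) three occurrences
    return table.get((state, last3))   # steps 2-3: read off the next move

# Toy example: a single-state table that answers 'no' to the question 'q'
# (outer tags and justifiers are omitted in this sketch).
table = {('s0', ('q',)): 'no'}
print(execute_step(table, lambda v: 's0', ['q']))  # no
```

The point of the sketch is the division of labor promised above: the table is the finite, symbolic, low-level object, while the strategy it realizes is the high-level process obtained by iterating execute_step along a play.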
Henceforth, in order to establish Theorem 76, which is a key result for the main theorem (Theorem 81), we shall focus on the following st-algorithms:

Definition 71 (Standard st-algorithms) An st-algorithm A::G is standard if:
1. The symbol does not occur in A_m for any m ∈ S_A;
2. It does not refer to any input outer tag when it computes an inner element, i.e., if q^[3].s.n^[3] ∈ A_m : G(M_G)^[0] & G(M_G)^[1] & G(M_G)^[2] ⇒ G(M_G)^[3], where m ∈ S_A and n ∈ π1(M_G), then q^[0], q^[1] and q^[2] do not occur in s;
3. If it refers to an input outer tag, then that tag must belong to the last move of the current P-view of G, i.e., if q occurs as a P-move in some s ∈ A_m, where m ∈ S_A, then the 'tag' on the move is (_)^[2].

Fig. 5 A play by the fixed-point strategy fix A : (A ⇒ A) ⇒ A
A typical play by fix_A is depicted in Fig. 5, and we define S_A(fix_A) =df π1(M^Init_{(A⇒A)⇒A}). Since A(fix_A)_m does not depend on m, fix an arbitrary state m ∈ S_A(fix_A). We proceed by a case analysis on the rightmost component of input strategies on G(M_{(A⇒A)⇒A})³ for A(fix_A)_m (which corresponds to the last occurrence of the P-view of each odd-length position of the game (A ⇒ A) ⇒ A).

Let us define the finite set S_A(σ⊗τ) of states and the query Q_A(σ⊗τ) simply by changing symbols m_X ∈ Sym(π1(M_{A⊸C})) and n_Y ∈ Sym(π1(M_{B⊸D})) into m_WX and n_EY, respectively, in their finite tables, where the view-scopes and the mate-scopes are defined similarly. Then, because a P-view of σ ⊗ τ is either a P-view of σ or one of τ (which is shown by induction on the length of positions of σ ⊗ τ), it is straightforward to see that st(A(σ ⊗ τ)) = σ ⊗ τ holds. Also, it is clear that A(σ ⊗ τ) is standard if so are A(σ) and A(τ). Intuitively, A(σ ⊗ τ) sees the new digit (W or E) of the current state s ∈ S_A(σ⊗τ) and decides whether to apply A(σ) or A(τ) (n.b., since Q_A(σ⊗τ) tracks every initial move by the axiom Q, the state must be non-empty).
It is clear that pairing of strategies may be handled in a completely similar manner; currying and uncurrying are even simpler. Thus, we skip the proof for them. Now, consider the concatenation ι‡κ : J‡K of viable strategies ι : J and κ : K such that H^ω(J) ⊴ A ⊸ B and H^ω(K) ⊴ B ⊸ C for some normalized games A, B and C. Let A(ι) and A(κ) be standard st-algorithms such that st(A(ι)) = ι and st(A(κ)) = κ. We define the set S_A(ι‡κ) of states and the query Q_A(ι‡κ) accordingly. We construct the finite partial function A(ι‡κ)_n (as well as the view- and the mate-scopes) from A(ι)_n or A(κ)_n, as appropriate, by modifying the symbols in the table similarly to the case of tensor (where the view- and the mate-scopes are just inherited). Again, Q_A(ι‡κ) clearly satisfies the axiom Q. Because a P-view of ι‡κ is one of κ, or one of ι followed by one of κ (it is crucial here that Q_A(ι) tracks initial moves by the axiom Q, and that the symbol does not occur in A(ι) since it is standard), we may conclude that st(A(ι‡κ)) = ι‡κ. Moreover, A(ι‡κ) is clearly standard as so are A(ι) and A(κ).
Finally, A(φ†)_s calculates the first half n.C*(ê) of n.C*(e′) by simulating the computation of n by A(φ)_s and referring to C*(ẽ′), and then computes the remaining half C*(h) by simulating the computation of C*(e) by A(φ)_s. Again, A(φ†)_s is clearly standard. • The remaining two cases are completely analogous to the above cases.
It should be clear from the above description how to construct A(ϕ † ) from A(ϕ), completing the proof.
With the help of m-views, there is clearly a finite table A(fix A ) m that implements A(fix A ) m . It is then not hard to see that st(A(fix A )) = fix A holds, showing that fix A is viable. Also, it is easy to see that A(fix A ) is standard.

Turing completeness
In the last two sections, we have seen through examples that each 'atomic' strategy definable by PCF [6,71] is viable, and that it is realized by a standard st-algorithm. In addition, a TM M may simulate the execution of an st-algorithm roughly as follows:
1. Copy the last occurrence [m_1]_{e^(1)} of the current P-view on the second tape onto the initial cells of the third tape, calculate its identifier and m-view, possibly utilizing the sixth tape as a working tape, and write them down on the fourth and the fifth tapes, respectively, in the obvious manner, erasing all contents of the sixth tape after the calculation;
2. Locate the second last occurrence [m_2]_{e^(2)} of the current P-view on the second tape by the identifier associated with the occurrence [m_1]_{e^(1)}, and then execute the same computation as the one on [m_1]_{e^(1)}, where the new contents prefixed with $ on the third, the fourth and the fifth tapes are concatenated to the existing contents;
3. Similarly, locate the third last occurrence [m_3]_{e^(3)} of the current P-view on the second tape (which is easy, as it is located next to [m_2]_{e^(2)}) and execute the same computation on it (so that the third, the fourth and the fifth tapes contain all information of the last three occurrences of the P-view);
4. With the contents on the third, the fourth and the fifth tapes, compute the next P-move [m]_e and the identifier of its justifier, write them down on the second tape, and erase all contents on the third, the fourth and the fifth tapes.
Note that M is clearly able to execute all the computational steps described above, in particular the last step ([m 3 ] e (3) , [m 2 ] e (2) , [m 1 ] e (1) ) → [m] e by simulating the computation of an instruction strategy for φ, completing the proof.
Remark Theorem 85 does not hold for higher-order computation because TMs cannot take additional inputs from O during the course of computation. Of course, one may consider generalized TMs that may interact with O, but then they are no longer TMs in the usual sense; in fact, this idea naturally leads to the game-semantic model of computation developed in the present paper.
Also, as an immediate corollary of Theorem 85, we obtain Corollary 89, for which we first need some auxiliary concepts:

Definition 86 (Standard strategies on PCF-games) A PCF-game is a normalized game constructed from N, 2 and/or T via & and/or ⇒. A strategy σ on a PCF-game G is standard if the substance (Definition 6) of the last move of the j-subsequence s ↾ N^[i] of each maximal (with respect to the prefix relation) element s ∈ σ that consists of moves of the same component game N^[i] (i.e., moves of N with the same inner tag) is no whenever s ↾ N^[i] is non-empty and of even length.
Next, let us introduce the translation T that maps PCF-games and standard strategies on them to the corresponding conventional games and strategies, respectively, simply by replacing the lazy natural number game N with the conventional natural number game:

T(σ) =df Pref({T(s) | s ∈ σ, s is maximal in σ})^Even.
Remark By the specific methods 1.a and 1.b in step 1 of the translation σ → T(σ), the relative order between occurrences in different component games of T(G) is automatically determined (and thus T(σ) is unambiguous). Note also that the strategy σ : G is required to be standard since otherwise some j-subsequence $s \upharpoonright N$ could not be translated by T.

The intrinsic equivalence $\simeq_G$ on strategies on a game G is defined by

$\sigma \simeq_G \sigma' \;\Leftrightarrow\; \forall \tau : G \Rightarrow \Sigma.\ \tau \bullet \sigma^{\dagger}_{T} = \tau \bullet \sigma'^{\dagger}_{T}$

for all σ, σ' : G, where $\sigma_T, \sigma'_T : T \Rightarrow G$ are σ and σ' up to tags, respectively, and Σ is the game given by

$\Sigma = \mathrm{Pref}(\{[q][a]\}).$
See [7,38,51] for the proof that $\simeq_G$ is in fact an equivalence relation on strategies on a given game G. We are finally ready to establish:

Corollary 89 (Universality) Let $\llbracket A \rrbracket$ be the dynamic game semantics of a type A of PCF (following [71]). Given any viable strategy $\alpha : \llbracket A \rrbracket$ such that $H^{\omega}(\alpha) : H^{\omega}(\llbracket A \rrbracket)$ is standard, the strategy $T(H^{\omega}(\alpha)) : T(H^{\omega}(\llbracket A \rrbracket))$ coincides, up to the intrinsic equivalence [6], with the conventional game semantics of some term of PCF.
Proof (sketch) Applying the proof of Theorem 85, we may see that $T(H^{\omega}(\alpha)) : T(H^{\omega}(A))$ is recursive, i.e., each move performed by $T(H^{\omega}(\alpha))$ is computable by a TM; thus, by the universality theorem of [7,38], it coincides, up to the intrinsic equivalence, with the conventional game semantics of some term of PCF.

Conclusion and future work
The present work has given a novel notion of 'computability,' namely viability of strategies. Due to its intrinsic, non-inductive, non-axiomatic nature, it can be seen as a fundamental investigation of 'effective' computation beyond classical computation; note that viability of strategies makes sense universally, i.e., regardless of the underlying games (e.g., games do not have to correspond to types of PCF).
Furthermore, our game-semantic model of computation formulates both high-level and low-level computational processes and defines 'computability' of the former in terms of the latter, which sheds new light on the very notion of computation. For instance, strategies $n : T \Rightarrow N$ may be seen as the definition of natural numbers, and thus a viable strategy of the form $\phi : N^{k} \Rightarrow N$ can be regarded as high-level computation on natural numbers, not on their representations, while (the table of) an st-algorithm that realizes $\phi$ can be seen as its symbolic implementation.
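The distinction may be illustrated by the following toy Python sketch (our own simplification, not part of the formal development): addition on natural numbers themselves versus a symbolic implementation on unary numerals, related by an encoding.

```python
def add_highlevel(m, n):
    # High-level process: operates directly on mathematical natural numbers.
    return m + n

def encode(n):
    # Symbolic representation: the unary numeral of n, e.g. 3 -> "111".
    return "1" * n

def decode(s):
    # Recover the natural number denoted by a unary numeral.
    return len(s)

def add_unary(m_repr, n_repr):
    # Low-level symbolic implementation: manipulates representations,
    # the way a TM does, here by concatenating unary numerals.
    return m_repr + n_repr
```

Here `decode(add_unary(encode(2), encode(3)))` gives `5`, agreeing with `add_highlevel(2, 3)`; 'computability' of the high-level function is witnessed by its low-level symbolic implementation.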
There are various directions for further work. First, we need to analyze the exact computational power of viable strategies in comparison with other known notions of higher-order computability [50]. Also, as an application, the present framework may give an accurate measure of computational complexity [47]; the work on dynamic games and strategies [71] has already given such a measure via internal moves, but the present work may refine it further, since two single steps in a game G may take different numbers of steps in the instruction game $G(M_G)^{3} \Rightarrow G(M_G)$. Moreover, it is of theoretical interest to see which theorems in computability theory can be generalized by the present framework, in addition to the smn and the first recursion theorems. However, the most pressing future work is perhaps, by exploiting the flexibility of game semantics, to enlarge the scope of the present work (i.e., beyond the language PCF) in order to establish a computational model of various logics and programming languages. We are particularly interested in how to apply our approach to non-innocent strategies.
Finally, let us propose two open questions. Since the definition of viable strategies is somewhat reflexive (as it is via strategies), we may naturally consider strategies that can be realized by a viable strategy. Let us define such strategies to be 2-viable. More generally, rephrasing viability as 1-viability, we define a strategy to be (n + 1)-viable, for each n ∈ N, if it can be realized by an n-viable strategy. Clearly, any n-viable strategy is 'effective' in an informal sense. Then, the first question is: Is the class of all (n + 1)-viable strategies strictly larger than that of all n-viable strategies for each n ∈ N?
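To make the hierarchy concrete in a finite toy setting, the following Python sketch (entirely illustrative: the 'realizes' relation is given as a finite table, and levels are taken to be cumulative) computes the set of n-viable strategies; the names `n_viable` and `realizes` are ours.

```python
def n_viable(realizes, base, n):
    """Toy model of the hierarchy: level 1 is the given base class, and a
    strategy is (k + 1)-viable if some k-viable strategy realizes it.

    realizes -- dict sending a strategy name to the set of names it realizes
    base     -- set of 1-viable strategy names
    n        -- level of the hierarchy to compute (n >= 1)
    """
    level = set(base)
    for _ in range(n - 1):
        # Cumulative reading: keep the previous level and add everything
        # realized by some strategy in it.
        level |= {t for s in level for t in realizes.get(s, set())}
    return level
```

With `realizes = {"a": {"b"}, "b": {"c"}}` and `base = {"a"}`, the levels grow as `{"a"}`, `{"a", "b"}`, `{"a", "b", "c"}` for n = 1, 2, 3; the first open question asks whether such strict growth can occur for actual viable strategies at every level.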
This question seems highly interesting from a theoretical perspective. If the answer is positive for all n ∈ N, then there would be an infinite hierarchy of generalized viable strategies. It is then natural to ask the following second question: Does the hierarchy, if it exists, correspond to any known hierarchy (perhaps in computability theory or proof theory)?
We shall aim to answer these questions as future work as well.