Martin-L\"of reducibility and cost functions

Martin-L\"of (ML)-reducibility compares $K$-trivial sets by examining the Martin-L\"of random sequences that compute them. We show that every $K$-trivial set is computable from a c.e.\ set of the same ML-degree. We investigate the interplay between ML-reducibility and cost functions, which are used both to measure the number of changes in a computable approximation, and to determine the type of null sets used to capture ML-random sequences. We show that for every cost function there is a c.e.\ set ML-above the sets obeying it (called an ML-complete set for the cost function). We characterise the $K$-trivial sets computable from a fragment of the left-c.e.\ random real~$\Omega$. This leads to a new characterisation of strong jump-traceability.


Introduction
Martin-Löf (ML) randomness and $K$-triviality are antipodal properties of sets of natural numbers. Nonetheless, sets of the two kinds interact in interesting ways via Turing reducibility. For instance, combining the results of [3, 6], a c.e. set is $K$-trivial if and only if it is computable from a Turing incomplete ML-random set; see [2].
Our purpose is to study the relative complexity of the $K$-trivial sets via a preordering coarser than Turing reducibility that is given by this interaction: $A \le_{\mathrm{ML}} B$ if every ML-random computing $B$ also computes $A$. This preordering, called ML-reducibility, was introduced in [3] by Bienvenu, Kučera, and three of the authors of the present paper. They showed that there is an ML-complete $K$-trivial (which they called ``smart''). While the $K$-trivials appear somewhat amorphous under Turing reducibility, we will show that an interesting structure emerges when they are viewed through the lens of this preordering. Research in this direction was first carried out by three of the authors of the present paper in [14]. They described a dense linear hierarchy of natural principal ideals in the ML-degrees of the $K$-trivials. Our first result, Theorem 3.1, shows that each $K$-trivial is ML-equivalent to a c.e. $K$-trivial, so the structure we find on the $K$-trivials is fully given by c.e. witnesses.

Date: February 11, 2022. 2010 Mathematics Subject Classification: Primary 03D32; Secondary 03D30, 68Q30. Greenberg and Nies were supported by the Marsden Fund of New Zealand. Greenberg was also supported by a Rutherford Discovery Fellowship from the Royal Society of New Zealand. This research was started during a retreat at the Research Centre Coromandel.

Some background on ML-randomness and $K$-triviality. Sets of natural numbers (often simply referred to as sets) will be identified with infinite bit sequences. ML-randomness is central among the notions of randomness given by algorithmic tests. A set $Z \subseteq \omega$ is ML-random (sometimes just called ``random'' in this paper) if $Z \notin \bigcap_m G_m$ for every sequence $\langle G_m \rangle$ of uniformly $\Sigma^0_1$ sets in Cantor space such that the Lebesgue measure of $G_m$ is at most $2^{-m}$. A sequence $\langle G_m \rangle$ of this kind is called an ML-test.
Chaitin's $\Omega$, the halting probability of a universal prefix-free machine, is an example of an ML-random sequence. Note that $\Omega$ is Turing equivalent to the halting problem. There are various ways to build a low ML-random sequence. One is to use the low basis theorem together with the fact that there is a universal ML-test. Another is to take $\Omega_R$, the bits of $\Omega$ whose locations lie in an infinite, co-infinite computable set $R$; to see that $\Omega_R$ is low one can e.g.\ combine [25, Prop. 3.4.10], due to [23], with van Lambalgen's theorem [29].
Let $K(x)$ denote the prefix-free descriptive complexity of a string $x$. One says that a set $A$ is $K$-trivial if there is a constant $b$ such that $K(A \upharpoonright n) \le K(n) + b$ for each $n$. (Here the ``$n$'' in $K(n)$ is interpreted as a string, for instance the string obtained by writing $n$ in binary.) Note that $K(n)$ is, up to a constant not depending on $n$, the lowest complexity possible for a string of length $n$. So $K$-trivial sets have minimal initial segment complexity, again up to a constant. The Levin-Schnorr theorem states that a sequence $Z$ is ML-random iff $K(Z \upharpoonright n) \ge n - d$ for all $n$, where $d$ depends only on $Z$. As $K(n) \le 2 \log n + O(1)$, the notion of $K$-triviality is indeed antipodal to ML-randomness: $K$-trivial by definition means far from random.
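The antipodality can be displayed by restating the two notions side by side:

```latex
\begin{align*}
A \text{ is } K\text{-trivial} &\iff \exists b\, \forall n\;\; K(A\upharpoonright n) \le K(n) + b
  \quad\text{(as compressible as possible)},\\
Z \text{ is ML-random} &\iff \exists d\, \forall n\;\; K(Z\upharpoonright n) \ge n - d
  \quad\text{(incompressible, by Levin--Schnorr)}.
\end{align*}
```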
Eighteen or so characterisations of the $K$-trivials are presently known, most of them saying that the set is in some sense close to computable. For instance, $A$ is $K$-trivial iff $A$ is low for ML-randomness, in the sense that each ML-random is ML-random relative to $A$ [24]. Among the more recent ones: $A$ is $K$-trivial iff for each ML-random $Y$, the symmetric difference $Y \triangle A$ is ML-random [22]; and $A$ is $K$-trivial iff for all $Y$ such that $\Omega$ is $Y$-random, $\Omega$ is $Y \oplus A$-random [13].
Despite these characterisations, and the detailed knowledge of the class of $K$-trivials they appeared to convey, paradoxically not much progress had been made on the internal structure of the class since the early papers, such as [18, 24]. It was known that the $K$-trivials are downward closed under Turing reducibility, and that they determine an ideal in the Turing degrees that is contained in the superlow degrees, is generated by its c.e. members, and has no greatest degree (i.e., it is non-principal). In hindsight it appears that Turing reducibility was the wrong preordering for analysing the internal structure. We will use the coarser ML-reducibility to amend this lack of information.
Two questions arise for a class of $K$-trivials:
(a) Can one describe the class by referring only to properties of its members, rather than to the randoms that compute them?
(b) Is some set ML-complete for the class? In other words, is the ML-ideal principal?
As an example of a description as in (a), consider the original definition [9] of strong jump traceability of a set $A$, which (as the name indicates) refers to a way of tightly approximating the values of $J^A$, the Turing jump function of $A$.
A general way to formulate a condition of lowness among the $K$-trivials is by restricting the changes of computable approximations. Recall that each $K$-trivial $A$ is $\Delta^0_2$ and hence has a computable approximation $\langle A_s \rangle$. A cost function is a computable function $c \colon \mathbb{N} \times \mathbb{N} \to \{r \in \mathbb{R} : r \ge 0\}$, which is typically chosen to be nondecreasing in $s$, nonincreasing in $x$, and to satisfy the limit condition, namely, the asymptotic cost $c(x) = \sup_s c(x,s)$ approaches $0$ as $x$ increases. The idea is that at stage $s$, the least $x$ such that $A_s(x)$ changes incurs the cost $c(x,s)$, which subsumes the cost of changes at larger numbers at the same stage; a set $A$ obeys a cost function $c$ if it has a computable approximation that is sufficiently ``inert'' in that it incurs a finite total cost. (This means more than that there are few changes; it also means that changes need to be carried out in ``blocks'', which saves cost because only the change at the least number is counted.) It is a basic fact that each cost function satisfying the limit condition is obeyed by a noncomputable c.e. set. Building on previous results, Nies [27, Thm. 4.3] showed that obedience to the cost function $c_\Omega(x,s) = \Omega_s - \Omega_x$ characterises the $K$-trivials.
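To make the bookkeeping concrete, here is a toy Python computation of the total cost of a finite computable approximation. The cost function and the approximation are invented for the example, and for simplicity we ignore the convention that $c(x,s) = 0$ for $x \ge s$; only the least changed position is charged at each stage.

```python
# Toy illustration (not a construction from the paper): total c-cost of a
# finite computable approximation, charging only the least changed position.

def total_cost(approximations, c):
    """Sum c(x_s, s) over stages s at which the approximation changes,
    where x_s is the least position whose value changes at stage s."""
    total = 0.0
    for s in range(len(approximations) - 1):
        cur, nxt = approximations[s], approximations[s + 1]
        changed = [x for x in range(len(cur)) if cur[x] != nxt[x]]
        if changed:
            # changes at larger positions at the same stage are subsumed
            total += c(min(changed), s)
    return total

def c(x, s):
    # toy cost, nonincreasing in x; the stage s is ignored for simplicity
    return 2.0 ** (-x)

# Approximation of a set on positions 0..3.
A = [
    [0, 0, 0, 0],
    [0, 0, 1, 0],   # least change at x = 2: cost 2^-2
    [0, 1, 1, 1],   # changes at x = 1 and x = 3; only x = 1 is charged: 2^-1
]
print(total_cost(A, c))  # 0.75
```

Obedience asks that this total stays finite over infinitely many stages, which rewards approximations that change rarely and in blocks.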
We will show that an affirmative answer to (a) via obedience to a cost function implies an affirmative answer to (b). Given a cost function $c$ that is at least as strong as $c_\Omega$, we show as an easy corollary to our second main result, Theorem 4.3, that some c.e. set $A$ obeys $c$ and is ML-complete among the sets obeying $c$. So if a cost function $c$ describes a complexity class, then that class has an ML-complete member. In particular, this holds for the class of all $K$-trivials, which therefore determines a principal ML-ideal, a result that was first obtained in [3] using similar methods. (At present this appears to be essentially the only known way to show that a complexity class within the $K$-trivials has an ML-complete member.) Given a low c.e. set $B$, some c.e. set $A \nleq_T B$ obeys $c$ by [25, Thm. 5.3.22]. So the sets obeying $c$ (being $K$-trivial, and hence low) never form a Turing principal ideal. This lends support to our thesis that, compared to Turing reducibility, the coarser ML-reducibility leads to a more satisfying complexity theory of the $K$-trivials.
The class of half-bases is a further example of a complexity class that can be described by a cost function. It yields one level of the dense hierarchy of ML-ideals described in [14] to which we alluded earlier. One says that a set $A$ is a half-base if there is an ML-random $Y$ such that $A \le_T Y_0, Y_1$, where $Y_0$ consists of the bits of $Y$ in the even positions, and $Y_1$ of the bits in the odd positions. Note that each half-base is a basis for ML-randomness by van Lambalgen's theorem, and hence $K$-trivial by [18]. By [14, Thm. 1.1], one can require that $Y = \Omega$; furthermore, by [14, Thm. 1.3] the class of half-bases can be described by the cost function $c_{\Omega,1/2}(x,s) = \sqrt{c_\Omega(x,s)}$. Larger cost functions are harder to obey, in a sense made precise in [27, Thm. 3.4]. So $c_{\Omega,1/2}$ describes a proper subclass of the $K$-trivials. The properness of the inclusion of the class of half-bases in the $K$-trivials is a result first obtained in [3, Thm. 1.3]. By our method, there is an ML-complete half-base.
By definition, a set $A$ is ML-complete for a class $\mathcal{C}$ of $K$-trivials if, among the members of $\mathcal{C}$, it has the least class of ML-randoms computing it. We build a c.e. set $A$ that is ML-complete for the sets obeying $c$ by showing that the ML-randoms computing $A$ cannot be very random, in the sense that they fail a generalised type of ML-test called a $c$-test, where the convergence of the measure of $G_m$ to $0$ is bounded by $O(c(m))$ rather than by $2^{-m}$ (recall here that $c(m) = \sup_s c(m,s)$). We then use the important, if easy, Proposition 2.6 below: any set $B$ obeying $c$ is Turing below any ML-random failing such a test.
Our Theorem 4.3 extends the basic fact that every cost function $c$ is obeyed by a noncomputable c.e. set. The c.e. set $A$ we build obeys $c$, but ``only just'', in the sense that the collection of ML-randoms computing $A$ is as small as possible. We call such a set $A$ smart for $c$, continuing the terminology of [3] for $c_\Omega$; the theorem states that each cost function has a smart set. Then we verify, as an easy consequence of this existence of a smart set, that smartness for $c$ coincides with ML-completeness for $c$.
The previous works [3, 14] and, in particular, the present paper show that, far from being an obstacle, the fact that $K$-trivials are close to computable can be advantageous for the study of their relative computational complexity. Tools can be applied that would not work for computationally more complex $\Delta^0_2$ sets. The main idea described above is to differentiate between $K$-trivials via the incomplete randoms that compute them. In contrast, a c.e. set that is not $K$-trivial is computable only from randoms above the halting problem by [18]; such c.e. sets all have the same ML-degree.
The $K$-trivials Turing below a fragment $\Omega_R$ of $\Omega$. Bit sequences derived in some way from Chaitin's $\Omega$ often play a special role in the algorithmic theory of randomness.
For an infinite computable set $R$, we defined above the sequence $\Omega_R$ of bits of $\Omega$ with location in $R$. Section 6 introduces a cost function $c_{\Omega,R}$ that describes the class of $K$-trivials computable from $\Omega_R$. As a main result of this paper, we show in Theorem 6.6 that this cost function essentially depends only on the function $n \mapsto |R \cap n|$ that gives the number of elements of $R$ less than $n$, taken up to an additive constant. For instance, if $R$ is the set of even numbers and $S$ the set of odd numbers, then the cost functions corresponding to $R$ and $S$ are equivalent as far as obedience goes, and hence the $K$-trivials below $\Omega_R$ coincide with the $K$-trivials below $\Omega_S$. This can be extended to $k/n$-bases, for $1 \le k < n$, in the sense of [14], where one takes for $R$ any union of $k$ sets of the form $n\mathbb{N} + r$, $0 \le r < n$. Let $B_{k/n}$ be a smart set for the corresponding cost function. Via the results in [14] these sets determine a chain in the ML-degrees of $K$-trivials that is isomorphic to $(0,1)_{\mathbb{Q}}$. For detail see the discussion around Theorem 6.1.
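To illustrate the role of the counting function in Theorem 6.6 (with $|R \cap n|$ read up to an additive constant, as above):

```latex
R = \{0, 2, 4, \dots\},\ S = \{1, 3, 5, \dots\}
\;\Longrightarrow\;
\bigl|\,|R \cap n| - |S \cap n|\,\bigr| \le 1 \text{ for all } n
\;\Longrightarrow\;
\bigl( A \models c_{\Omega,R} \iff A \models c_{\Omega,S} \bigr).
```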
As an application of Theorem 6.6, in Theorem 7.1 we show that the $K$-trivials computable from $\Omega_R$ are exactly those that obey $c_{\Omega,R}$, as promised. Given a cost function $c$, an ML-random set $Y$ failing a $c$-test will be called feeble for $c$ if the only $K$-trivials it computes are the ones that obey $c$. This notion is dual to smartness for $K$-trivials. In this language, we show that $\Omega_R$ is feeble for $c_{\Omega,R}$.
Structure of the $\le_{\mathrm{ML}}$-degrees of $K$-trivials. By its definition, ML-reducibility is a weakening of Turing reducibility. The least degree consists of the computable sets. The usual join operation $\oplus$ induces a least upper bound in the ML-degrees.
Dual to Theorem 4.3, we will show in Proposition 5.1 that each $K$-trivial $A$ is smart for some cost function $c^{(A)}$ that can be obtained uniformly from $A$. This yields further degree-theoretic information in Corollary 5.3. Firstly, there are no minimal pairs in the ML-degrees of $K$-trivials. Secondly, no ML-degree of a noncomputable $K$-trivial contains a maximal Turing degree; in particular, it contains an infinite ascending chain of Turing degrees. We do not know whether the ML-degree of a noncomputable $K$-trivial can contain a minimal Turing degree.
As a further, more powerful application of Theorem 6.6, we will show in Theorem 7.4 that every countable partial ordering is embeddable into the ML-degrees of $K$-trivial sets. Also, based on a method of Kučera [20], we obtain a pair of incomparable degrees below each nonzero $K$-trivial ML-degree. In fact, we show that for each noncomputable c.e. $K$-trivial $D$, there are c.e. sets $A, B \le_T D$ such that $A \mid_{\mathrm{ML}} B$.
Only basic facts are known about ML-reducibility outside the $K$-trivials. Since each Turing degree above $\emptyset'$ contains an ML-random, for sets above $\emptyset'$ the two reducibilities coincide. More generally, for PA-complete sets $A, B$, we have
$$A \le_T B \oplus \emptyset' \;\Longleftrightarrow\; A \le_{\mathrm{ML}} B.$$
(We thank the referee for pointing this out.) For the direction from left to right, one uses the result of Stephan [28] that the only PA-complete ML-randoms are the ones above $\emptyset'$.
In the final section, Section 9, we connect obedience to the cost functions $c_{\Omega,R}$ to strong jump traceability, the aforementioned very strong lowness notion. While the original definition [9] was combinatorial, Hirschfeldt et al. [12] showed that a set is strongly jump traceable iff it is Turing below each $\omega$-c.a.\ ML-random $Y$. We prove that $A$ is strongly jump traceable iff it obeys all the cost functions $c_{\Omega,R}$ for infinite computable $R$. In particular, it suffices to let the $\omega$-c.a.\ sets $Y$ as above range over the fragments $\Omega_R$.
Remark 1.1. The correspondence between cost functions and ML-degrees is incomplete. Firstly, a cost function $c$ determines an ML-degree, that of the sets which are smart (equivalently, ML-complete) for $c$. However, not every set in that degree obeys $c$. Secondly, every $K$-trivial set $A$ is smart for some cost function $c^{(A)}$. However, this cost function is not determined by the ML-degree of $A$; in fact, in Theorem 5.4 we construct an example of a set $A$ such that even the set that results from $A$ by removing the first bit does not obey $c^{(A)}$.
Ideally, we could characterise ML-reducibility on the $K$-trivials in terms of which cost functions they obey. This would give a satisfying positive answer to the following question, which remains open:

Question 1.2. Is the relation $\le_{\mathrm{ML}}$ on the $K$-trivial sets arithmetical?
In fact, a weaker question remains open: whether ML-completeness among the $K$-trivials is arithmetical. Another question that remains unsettled is whether the ML-degrees of $K$-trivials are dense. Given that Question 1.2 remains open, it is hard to envisage a requirement-based construction showing density, as this would need some sort of effective listing of ``ML-reduction procedures''. Cost-function-based methods appear to be insufficient here.

Some formal definitions and facts
In this section, for easy reference, we provide formal definitions of some of the notions discussed above. We discuss some technical details and basic connections that will be important for the rest of the paper. We only consider monotonic cost functions (satisfying $c(x,s) \le c(x,s+1)$ and $c(x,s) \ge c(x+1,s)$) that have the limit condition: for all $x$, $c(x) = \lim_s c(x,s)$ exists, and $\lim_x c(x) = 0$. Further, we assume that $c(x,s) = 0$ when $x \ge s$.
The original purpose of cost functions was to quantify the number of changes required in a computable approximation of a $\Delta^0_2$ set $A$: $c(x,s)$ is the cost of changing, at stage $s$, our guess about the value of $A(x)$. Monotonicity means that the cost of a change increases with time, and that changing the value at a smaller number is more costly. Formally:

Definition 2.3 ([25]). Let $\langle A_s \rangle$ be a computable approximation of a $\Delta^0_2$ set $A$, and let $c$ be a cost function. The total $c$-cost of the approximation is
$$c\langle A_s \rangle = \sum \{\, c(x,s) : s \in \mathbb{N} \text{ and } x \text{ is least with } A_s(x) \ne A_{s+1}(x) \,\}.$$
We say that a $\Delta^0_2$ set $A$ obeys $c$ if the total $c$-cost of some computable approximation of $A$ is finite. We write $A \models c$. For cost functions $c$ and $d$, we write $c \to d$ if $A \models c$ implies $A \models d$ for each $\Delta^0_2$ set $A$. By [27, Thm. 3.4], this is equivalent to $d \le^{\times} c$. The basic existence theorem for cost functions, e.g.\ described in [27, Thm. 2.7(i)], says that if a cost function $c$ has the limit condition, then some noncomputable c.e. set obeys $c$. As mentioned, an important example of a cost function is $c_\Omega(x,s) = \Omega_s - \Omega_x$, where $\langle \Omega_s \rangle$ is an increasing sequence of rational numbers converging to the left-c.e., ML-random real $\Omega$. A set obeys this cost function if and only if it is $K$-trivial ([27, Thm. 4.3], which modified a result for a related cost function in [24]).

Definition 2.4. Let $c$ be a cost function and let $A$ be a $\Delta^0_2$ set. We say that $A$ is ML-complete for $c$ if $A \models c$ and $\forall B\,[B \models c \Rightarrow B \le_{\mathrm{ML}} A]$.
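As a check that the key example $c_\Omega$ meets the monotonicity and limit conditions:

```latex
\begin{align*}
c_\Omega(x,s) &= \Omega_s - \Omega_x \quad\text{is nondecreasing in } s
  \text{ and nonincreasing in } x,\\
c_\Omega(x) &= \sup_s c_\Omega(x,s) = \Omega - \Omega_x \to 0
  \text{ as } x \to \infty, \text{ since } \Omega_x \to \Omega.
\end{align*}
```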
Note that the implication arrow only goes from left to right; it is not true in general that the class of sets obeying a cost function is well behaved (see Remark 1.1).
We next add some formal detail to our discussion of Theorem 4.3 above, that each cost function has a smart c.e. set. As mentioned, cost functions can also be used to introduce randomness notions between weak 2-randomness and ML-randomness.

Definition 2.5 ([3], Def. 2.13). Let $c$ be a cost function. A sequence $\langle V_n \rangle$ of uniformly c.e. open sets such that $V_n \supseteq V_{n+1}$ for each $n$ is a $c$-bounded test (or $c$-test for short) if $\mu(V_n) \le^{\times} c(n)$ for all $n$.
We say that such a test captures a set $Y$ if $Y \in \bigcap_n V_n$. We also say that such a $Y$ fails the test. A sequence is $c$-random if it fails no $c$-test.
The fundamental connection between our two uses of a cost function is the following.

Proposition 2.6. Suppose that a $\Delta^0_2$ set $A$ obeys $c$, and that an ML-random set $Y$ is captured by some $c$-test. Then $A \le_T Y$.

To sketch the proof, say $Y$ is captured by the $c$-test $\langle V_n \rangle$. Define a functional $\Gamma$ by letting $\Gamma^X(n) = A_s(n)$ if $X$ enters $V_n$ at stage $s$. An $A$-change threatens to invalidate these definitions, so we build a Solovay test: if $A(n)$ is the least change at stage $s$, put $V_{n,s}$ into the test. Being ML-random, $Y$ is in only finitely many components of the Solovay test, so $\Gamma^Y(n) = A(n)$ for sufficiently large $n$.
As mentioned, Kučera showed that every $\Delta^0_2$ ML-random sequence is Turing above a noncomputable c.e. set. Hirschfeldt and Miller, in unpublished work dating from 2006, strengthened this: below all the randoms in any given null $\Sigma^0_3$ class there is a single noncomputable c.e. set. Relying on Proposition 2.6, these proofs can be framed in the language of cost functions; see [15] and [25, 5.3.15], respectively.
The existence of an ML-complete $K$-trivial was shown in [3]. Given that the $K$-trivials are the sets obeying $c_\Omega$, and the ML-randoms computing all $K$-trivials are the ones that fail some $c_\Omega$-test, ML-completeness for the $K$-trivials coincides with being smart for $c_\Omega$ in the sense of the next definition.

Definition 2.7. Let $c$ be a cost function and let $A$ be a $K$-trivial set. We say that $A$ is smart for $c$ if $A$ obeys $c$ and, for each ML-random set $Y$, $Y$ is captured by a $c$-bounded test $\Leftrightarrow A \le_T Y$.

Informally, $A$ is as complex as possible for obeying $c$, in the sense that the only random sets $Y$ above $A$ are the ones that have to be there because of Proposition 2.6. To summarise the discussion in the introduction: in Theorem 4.3, we show that there is a smart set for any cost function $c$ such that obedience to $c$ implies $K$-triviality. In Corollary 4.4, we use this to show that if $c \to c_\Omega$ then $A$ is smart for $c$ iff $A$ is ML-complete for $c$. Dual to Theorem 4.3, in Proposition 5.1, we prove that any $K$-trivial set $A$ is smart for some cost function $c^{(A)}$; when $A$ is c.e., this will be the strongest cost function obeyed by $A$.

Inherent enumerability of the K-trivials up to $\equiv_{\mathrm{ML}}$
In this section, we considerably strengthen the result [24] that every $K$-trivial is computable from a c.e. $K$-trivial: we show that the c.e. $K$-trivial can be taken to have the same ML-degree. This is a powerful tool. It is usually easier to prove results for the c.e. $K$-trivials; extra work is needed to lift results to the general case. Theorem 3.1 simplifies this process in many cases. Indeed, we use it in both Section 5 and Section 8 for this purpose.

Theorem 3.1. For every $K$-trivial set $A$, there is a ($K$-trivial) c.e. set $D$ such that $D \ge_T A$ and $D \equiv_{\mathrm{ML}} A$.
Note that the $K$-triviality of $D$ comes for free: every $K$-trivial is Turing below an incomplete ML-random sequence [2, 6], and every c.e. set below an incomplete random is $K$-trivial. So, by virtue of being ML-equivalent to $A$, the c.e. set $D$ must be $K$-trivial.
Theorem 3.1 follows from a fact of independent interest. Intuitively, the fact states that each $K$-trivial $A$ has a computable approximation that converges faster than any computation of $A$ from a random.

Lemma 3.2. For every $K$-trivial set $A$, there is a computable approximation $\langle A_s \rangle$ of $A$ such that for every ML-random $X$ and Turing functional $\Phi$ with $A = \Phi^X$ the following holds. For sufficiently large $n$, if $A \upharpoonright n \preceq \Phi^X_s$, then $A_t \upharpoonright n = A \upharpoonright n$ for every $t \ge s$.
Proof of Theorem 3.1. Assuming that the lemma holds, we argue that a random computing $A$ must also compute a modulus for $A$; this modulus will have c.e. degree. Let $\langle A_s \rangle$ be the approximation from the lemma. Let $D$ be the change-set for this approximation: $(n,k) \in D$ if and only if there is a sequence of stages $s_0 < \cdots < s_k$ with $A_{s_i}(n) \ne A_{s_{i+1}}(n)$ for all $i < k$. $D$ is clearly c.e., and it can compute $A(n)$ by searching for the least $k$ with $(n,k) \notin D$ and considering the parity of $k$ and the value of $A_0(n)$.
Suppose that A " Φ X for some random X.By the lemma, there is N such that for all n ě N , the approximation converges to A ae n faster than Φ X does.Thus X can compute Dpn, kq by waiting until a stage t with A ae n ď Φ X t and then only searching for sequences of stages s 0 ă ¨¨¨ă s k such that s k ď t.For n ă N , we can arrange that our computation knows Dpn, kq by table lookup.
For the rest of the paper, we fix a Turing functional $\Upsilon$ that is universal in the sense that $\Upsilon^{0^e 1^\frown X} = \Phi^X_e$ for each $X$ and $e$. We assume that for every $e$, for all sufficiently large $n$, for all $s$ and $X$, $\Phi_{e,s}(X;n)\!\downarrow$ implies $\Upsilon_s(0^e 1^\frown X; n)\!\downarrow$.
Proof of Lemma 3.2. It suffices to prove the lemma for the functional $\Upsilon$. Let us first give a brief explanation of the proof. We will use the ``Main Lemma'' derived from the golden run construction [25, 5.5.1]. The Main Lemma says that if we design a left-c.e. oracle discrete measure on $\omega$ (equivalently, an adaptive additive cost function, or a prefix-free oracle machine), then there is a computable approximation $\langle A_s \rangle$ of $A$ such that the total of all weights that are believed at some stage of the construction and are later shown to be false is finite. Roughly, we would like, at stage $s$, to put the weight $\mu(\Upsilon^{-1}_s[A_s \upharpoonright n])$ on the string $A_s \upharpoonright n$, where $\mu$ is Lebesgue measure on Cantor space and $\Upsilon^{-1}_s[\sigma]$ is the class of oracles $X$ with $\sigma \preceq \Upsilon^X_s$. Such an approximation for $A$ would be as required: we could put a Solovay test on the reals that compute $A$ too early, and thus random oracles will only converge and agree with $A_s \upharpoonright n$ after it has settled.
The problem is that this definition does not give a discrete measure: there is no reason to believe that $\sum_n \mu(\Upsilon^{-1}[A \upharpoonright n])$ is finite. What we notice, though, is that if an oracle $X$ gives us a correct version of $A$ too early, then this version $A_s \upharpoonright n = A \upharpoonright n$ will later change to $A_t \upharpoonright n \ne A \upharpoonright n$, but after that will need to change back. We can thus put the weight not on $A \upharpoonright n$ but on an incorrect version $A_t \upharpoonright n$. And this is guaranteed to give a measure: the collection of strings of the form $(A \upharpoonright n)^\frown (1 - A(n))$ that disagree with $A$ only on the last bit is pairwise incomparable, and so the preimages under $\Upsilon$ of these strings are pairwise disjoint.
We provide the formal details. For $\sigma \in 2^{<\omega}$ with $\sigma \ne \langle\rangle$, define $\widetilde{\sigma}$ to be the binary string of the same length which disagrees with $\sigma$ on the final bit, but agrees on all other bits. For example, if $\sigma = 001011$, then $\widetilde{\sigma} = 001010$. For $\sigma \in 2^{<\omega}$ with $\sigma \ne \langle\rangle$, for brevity, define $U_\sigma = \Upsilon^{-1}[\widetilde{\sigma}]$, the class of oracles $X$ with $\widetilde{\sigma} \preceq \Upsilon^X$. To avoid the need of repeatedly dealing with $\langle\rangle$ separately, define $U_{\langle\rangle} = \emptyset$. Note that $(U_\sigma)_{\sigma \in 2^{<\omega}}$ are uniformly $\Sigma^0_1$ classes. Also, for $\sigma \prec \rho$, $U_\sigma$ and $U_\rho$ are disjoint. For our argument, we will require a computable approximation of $A$ that obeys an adaptive cost function, i.e., a cost function where the cost at a given stage depends on the approximation up to that stage. For $\langle A_t \rangle$ a computable approximation of $A$, define $c^{\langle A_t \rangle}(n,s) = \mu(U_{A_s \upharpoonright (n+1)}[s])$.

Claim 3.2.1. There is a computable approximation $\langle A_t \rangle$ of $A$ that obeys $c^{\langle A_t \rangle}$. That is, if $n_s$ is least with $A_s(n_s) \ne A_{s+1}(n_s)$, then $\sum_s c^{\langle A_t \rangle}(n_s, s) < \infty$.
Proof. Uniformly in $\sigma$ and $s$, let $C_{\sigma,s} \subset 2^{<\omega}$ be a finite antichain that generates $U_{\sigma,s}$, with $C_{\sigma,s} \subseteq C_{\sigma,s+1}$. Define an oracle machine $M$ with $M^\sigma_s(\pi)\!\downarrow$ for $\pi \in C_{\sigma,s}$. Since $U_\sigma$ and $U_\rho$ are disjoint for $\sigma \prec \rho$, $M$ is prefix-free.
Note that for $s > n$ and any computable approximation $\langle A_t \rangle$, the cost $c^{\langle A_t \rangle}(n,s)$ is the weight that the machine $M$, with oracle $A_s \upharpoonright (n+1)$, has placed by stage $s$. Fix a computable approximation $\langle \dot{A}_q \rangle$ of $A$. By the Main Lemma [25, 5.5.1] derived from the golden run construction, there is a computable sequence $q(0) < q(1) < \cdots$ with $q(0) \ge 1$ such that the approximation $A_s = \dot{A}_{q(s)}$ obeys $c^{\langle A_t \rangle}$, proving the claim. Now let $n_s$ be least with $A_s(n_s) \ne A_{s+1}(n_s)$, and put $B_s = U_{A_s \upharpoonright (n_s+1)}[s]$ into $S$; by the claim, $\sum_s \mu(B_s)$ is finite. Thus $S$ is a Solovay test.
Let $X$ be random and suppose that $A = \Phi^X_e$. Then $Y = 0^e 1^\frown X$ is random and $A = \Upsilon^Y$. Since $Y$ is not captured by $S$, we can fix an $s_0$ such that no $B_s$ with $s \ge s_0$ contains an initial segment of $Y$. Fix $N$ such that for all $n \ge N$, all $s$ and all $X$, $\Phi_{e,s}(X;n)\!\downarrow$ implies $\Upsilon_s(0^e 1^\frown X; n)\!\downarrow$.

Claim 3.2.2. For all $n \ge N$: if $A \upharpoonright n \preceq \Upsilon^Y_s$ for some $s \ge s_0$, then $A_t \upharpoonright n = A \upharpoonright n$ for every $t \ge s$.

Proof. Suppose $n \ge N$ were a counterexample. Let $s \ge s_0$ be such that $A \upharpoonright n \preceq \Upsilon^Y_s$, and let $t \ge s$ be such that $A_t \upharpoonright n \ne A \upharpoonright n$ and $A_{t+1} \upharpoonright n = A \upharpoonright n$. Note that, by definition, $n_t < n$. Since $A_t \upharpoonright n_t = A_{t+1} \upharpoonright n_t$, we know that $A_t \upharpoonright n_t = A \upharpoonright n_t \preceq \Upsilon^Y_s$ and $A_t \upharpoonright (n_t+1) \ne A \upharpoonright (n_t+1)$. So $\widetilde{A_t \upharpoonright (n_t+1)} = A_{t+1} \upharpoonright (n_t+1)$, and $Y \in U_{A_t \upharpoonright (n_t+1)}$. By assumption, $Y$ has already entered this $\Sigma^0_1$ class by stage $s$. Since $t \ge s$, $B_t$ contains an initial segment of $Y$, contrary to our choice of $N$ and $s_0$.

Since, for sufficiently large $n$, convergence of $\Phi^X_{e,s}$ up to $n$ implies convergence of $\Upsilon^Y_s$ up to $n$, the lemma follows.

For each cost function there is an ML-complete set
In this section, we prove the existence of a smart set as in Definition 2.7 for each cost function $c$ that implies $K$-triviality. This yields, in Corollary 4.4, the equivalence of smartness for $c$ and ML-completeness for the class of sets obeying $c$, and thereby the existence of an ML-complete set for the class.
For cost functions $c$ and $d$, one writes $c \to d$ if $A \models c$ implies $A \models d$ for every $\Delta^0_2$ set $A$. By [27, Thm. 3.4], this is equivalent to $c \ge^{\times} d$, that is, $c$ multiplicatively dominates $d$ (we may assume $c(x) > 0$ for every $x$). Recall that a set obeys $c_\Omega$ if and only if it is $K$-trivial. So all sets obeying a cost function $c$ are $K$-trivial if and only if $c \to c_\Omega$.
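Spelled out, with the quantifier form below being one standard reading of multiplicative domination (the precise statement is [27, Thm. 3.4]):

```latex
c \to d
\;\Longleftrightarrow\;
c \ge^{\times} d
\;\Longleftrightarrow\;
\exists N \in \mathbb{N}\ \forall x, s\;\; d(x,s) \le N \cdot c(x,s).
```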
We start with two simple lemmas.

Lemma 4.1. Suppose that $a^\frown Y$ fails a $c$-bounded test $\bigcap_n V_n$, where $a \in \{0,1\}$. Then $Y$ fails a $c$-bounded test.
Proof. We may suppose $a = 0$ and that $X \in V_n$ implies $X(0) = 0$. Then $\mu(T[V_n]) = 2\mu(V_n)$, where $T$ is the usual shift operator on Cantor space, and so $\langle T[V_n] \rangle$ is also a $c$-bounded test. Clearly $Y$ fails it.
We recall [27] that an additive cost function is a cost function of the form $c_\alpha(n,s) = \alpha_s - \alpha_n$, where $\langle \alpha_s \rangle$ is an increasing computable approximation of a left-c.e. real $\alpha$. So $c_\alpha(n) = \alpha - \alpha_n$. Since $\Omega$ is Solovay-complete among the left-c.e. reals, every time we see an increase in $\alpha$, we can cause a proportional and later increase in $\Omega$. Thus:

Lemma 4.2. If $c_\alpha$ is an additive cost function, then $c_\Omega \to c_\alpha$.
In particular, $2^{-n} \le^{\times} c_\Omega(n)$.

Theorem 4.3. Given a cost function $c$ such that $c \to c_\Omega$, one can uniformly obtain a c.e. set $A$ which is smart for $c$.
Proof. Recall that $\Upsilon$ is a ``universal'' Turing functional in the sense that $\Upsilon^{0^e 1^\frown X} = \Phi^X_e$ for all $X$ and $e$. We build $A$ and a $c$-test $\langle U_k \rangle$ capturing any ML-random $Y$ such that $A = \Upsilon^Y$. This suffices for the theorem by Lemma 4.1. The tension in this construction is between trying to capture all reals computing $A$, and keeping the measure of $U_n$ bounded by (a multiple of) $c(n)$. The idea is for us to move $A$ in case we see that too many oracles compute it. This needs to be done judiciously; we must ensure that $A$ obeys $c$. The basic idea, as in [3], is to charge the cost of changing $A$ to the increase in the measure of the error set, the set of oracles that have already been proven to be incorrect about $A$. Since $c \to c_\Omega$, the increase in the error set is bounded by $c$, and so we can catch our tail.
We proceed to the details. It will be clear from the proof that the construction is uniform in $c$.
We apply the usual language for strings: if $\sigma <_L \tau$ we say that $\sigma$ lies to the left of $\tau$, and $\tau$ to the right of $\sigma$. By delaying computations from appearing in $\Upsilon$ during the construction, we may assume that for all $Y$ and $s$, $\Upsilon^Y_s$ does not lie to the right of $A_s$. We build a global ``error set''
$$E_s = \{\, Y : \Upsilon^Y_t \text{ lies to the left of } A_t \text{ for some } t \le s \,\}.$$
An enumeration of a number into $A$ causes $A$ to move to the right, and so potentially adds elements to $E$; no elements can ever leave $E$. The basic idea, again, is that we enumerate a number $x$ into $A$ only when the cost $c(x,s)$ is smaller than the amount by which the measure of $E$ will increase. We will ensure that at every stage $s$,
$$(\diamond)\qquad \mu(U_{k,s}) \le c(k,s) + \mu(E_{s+1} - E_k).$$
By Lemma 4.2, $\mu(E - E_k) = c_{\mu(E)}(k) \le^{\times} c_\Omega(k)$, so as $c_\Omega(k) \le^{\times} c(k)$, the test $\langle U_k \rangle$ is indeed a $c$-test. We reserve the interval $I_k = [2^k, 2^{k+1})$ for ensuring $(\diamond)$.
The construction of the $c$-test $\langle U_k \rangle$ and the c.e. set $A$ is as follows. Let $s > k$. We let $x_s = x_s(k) = \min(I_k - A_s)$. If $(\diamond)$ threatens to fail at $s$, namely $\mu(U_{k,s}) > c(k,s) + \mu(E_s - E_k)$, we enumerate $x_s(k)$ into $A_{s+1}$. This causes $U_{k,s}$ to go into $E_{s+1}$. Since $U_{k,s}$ is disjoint from $E_k$, it follows in this case that $\mu(U_{k,s}) \le \mu(E_{s+1} - E_k)$, and so $(\diamond)$ holds at stage $s$. (Now the cycle can repeat: during stages $t \ge s+1$, the class $U_k$ is allowed to add measure up to $c(k,t)$ without any action necessary. If the measure added exceeds $c(k,t)$, another enumeration into $A$ will be needed.) First we verify that $x_s$ always exists, that is, that we enumerate at most $2^k$ times for the sake of $U_k$. By Lemma 4.2, we may assume that $c(x,s) \ge 2^{-x}$ for $x < s$. (To be clear, here we are using the fact that if $c =^{\times} d$, then the same sets obey $c$ and $d$.) If we enumerate $x_s(k)$ into $A_{s+1}$, then $\mu(U_{k,s}) > 2^{-k} + \mu(E_s - E_k)$. Since $U_{k,s} \cap E_k = \emptyset$ and $U_{k,s} \subseteq E_{s+1}$, it follows that $\mu(E_{s+1} - E_s) > 2^{-k}$. Since $\mu(E) \le 1$, this can happen at most $2^k$ times.
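The counting argument just given amounts to a single estimate: whenever we enumerate $x_s(k)$ into $A$,

```latex
\mu(E_{s+1} - E_s) \;\ge\; \mu(U_{k,s} - E_s) \;\ge\; \mu(U_{k,s}) - \mu(E_s - E_k) \;>\; 2^{-k},
```

using $U_{k,s} \subseteq E_{s+1}$ and $U_{k,s} \cap E_k = \emptyset$. Since $\mu(E) \le 1$, at most $2^k$ stages can do this, which is why the interval $I_k$ of size $2^k$ suffices.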
Recall that for all $Y$ and $s$, $\Upsilon^Y_s$ does not lie to the right of $A_s$. Hence, if $A = \Upsilon^Z$ then $Z \in \bigcap_k U_k$. It remains to verify that $A \models c$. If we enumerate $x_s(k)$ into $A_{s+1}$, then
$$\mu(U_{k,s} - E_s) = \mu(U_{k,s} - (E_s - E_k)) \ge \mu(U_{k,s}) - \mu(E_s - E_k) > c(k,s) \ge c(x_s, s).$$
Since $U_{k,s} - E_s \subseteq E_{s+1} - E_s$, we see that $c(x_s, s) < \mu(E_{s+1} - E_s)$. This implies that the total cost of the enumeration of $A$ is at most $\mu(E) \le 1$.
ML-completeness for a cost function was defined in Definition 2.4.
In particular, the ML-degree of a set $A$ that is smart for $c$ is uniquely determined by $c$. In contrast, for each low c.e. set $A$, there is a c.e. set $B \nleq_T A$ such that $B \models c$ [25, 5.3.22]. If $A$ is smart for $c$, then $A \oplus B$ is also smart for $c$. As every $K$-trivial is low, the Turing degree of a set $A$ that is smart for $c$ is never uniquely determined by $c$.

Each K-trivial set is ML-complete for a cost function
Given a $K$-trivial set $A$, we will define a cost function $c^{(A)}$ with $A \models c^{(A)}$ such that every random computing $A$ is captured by a $c^{(A)}$-test. In other words, we build $c^{(A)}$ in such a way that $A$ is smart for $c^{(A)}$. Furthermore, in case $A$ is c.e., $c^{(A)}$ is the strongest cost function that $A$ obeys, in the sense that if $A \models c$, then $c^{(A)} \to c$. In the introduction we mentioned applications of this to the structure of the ML-degrees of $K$-trivials, such as showing that there is no minimal pair. We will also provide an example showing that $c^{(A)}$ may not behave in an overly nice way. We build a c.e. $K$-trivial $A$ such that the class of sets obeying $c^{(A)}$ is not closed downward under $\le_T$. In fact, in our example $T(A) \not\models c^{(A)}$, where $T(A)$ is the shift of $A$, obtained by deleting the first bit.
As before, $\Upsilon$ denotes a universal Turing functional. Let $A$ be $K$-trivial. The idea for defining $c^{(A)}$ is as follows. Suppose first that $A$ is c.e., and let $\langle A_s\rangle$ be an effective enumeration of $A$ that obeys $c_\Omega$ [27]. We want to define $c^{(A)}$ so that we can capture, by a $c^{(A)}$-bounded test, all the reals $Z$ such that $\Upsilon^Z = A$. The natural test is $U_k = \bigcup_{s \ge k} \Upsilon_s^{-1}[A_s \restriction k]$. So we define $c^{A}(k) = \mu(U_k)$. Why does $A$ obey this cost function? Since the approximation is left-c.e., $A$ does not have to pay for the measure of the oracles that compute $A \restriction k$ correctly: these only appear after $A \restriction k$ has settled, and after that, all changes to $A$ are beyond $k$. So $A$ only needs to pay for the measure of those reals $Z$ that compute an incorrect version $A_s \restriction k$. This price is bounded by the increase in the measure of the error set: those oracles that compute some string to the left of $A$. Thus the total $A$-cost is the same as the total $A$-cost of an additive cost function, and hence bounded by the total $c_\Omega$-cost of this enumeration; but the enumeration was chosen so that this cost is finite.
When $A$ is not c.e., we use a c.e.\ intermediary. Let us give the details of the definition. By Theorem 3.1, fix a c.e.\ set $C \equiv_{ML} A$ that computes $A$; let $\Psi$ be a Turing functional such that $A = \Psi^C$. Fix an enumeration $\langle C_s\rangle$ of $C$ and an approximation $\langle A_s\rangle$ of $A$ that witnesses $A \models c_\Omega$. By speeding up both $\Psi$ and our approximations, we may assume that $A_s \restriction s \preceq \Psi^{C_s}_s$ for every $s$. To unify our construction with the earlier discussion, we assume that if $A$ is c.e., then $C = A$ and $\Psi$ is the natural reduction with identity use bound.
Similarly to what we did above, we define the classes $V_{x,t}$, and then let $c^{(A)}(x,s) = \mu\bigl(\bigcup_{t \in (x,s]} V_{x,t}\bigr)$. Note that $c^{(A)}$ is monotonic, as $V_{x,t} \supseteq V_{x+1,t}$. It satisfies the limit condition if $A$ is non-computable: certainly $c^{A}(x) \le 1$ for all $x$. (The verification considers the stage by which $A \restriction k$ has stabilised.) The next result states that $A$ is smart for $c^{(A)}$.

Proof. First we show that $A$ obeys $c^{(A)}$; in fact, the fixed approximation $\langle A_s\rangle$ witnesses this. Define an increasing approximation of the left-c.e.\ ``error real'' by $\varepsilon_s = \mu(E_{s+1})$. Suppose that $A_s(x) \ne A_{s+1}(x)$. For each $t \in (x,s]$ and every $Y \in V_{x,t}$, $\Upsilon^Y_t$ lies to the left of $C_{s+1}$, and so $Y \in E_{s+1}$; on the other hand, $Y \notin E_t$, and so $Y \notin E_{x+1}$. It follows that $c^{(A)}(x,s) \le \mu(E_{s+1} - E_{x+1}) = c_\varepsilon(x,s)$. By Lemma 4.2, $c_\Omega \to c_\varepsilon$, and so $c^{(A)}\langle A_s\rangle \le c_\varepsilon\langle A_s\rangle \le^{\times} c_\Omega\langle A_s\rangle$, and we assumed that the latter is finite.
Next we show that every random real that computes $A$ is captured by some $c^{(A)}$-bounded test. Since $C \le_{ML} A$, every such real computes $C$. By Lemma 4.1, it suffices to build a $c^{(A)}$-test capturing any random $Y$ such that $C = \Upsilon^Y$. The desired test is the test $U_k = \bigcup_{s > k} V_{k,s}$ defined above. (Again, we assume that we delay computations, so that for all $Y$ and $s$, $\Upsilon^Y_s$ does not lie to the right of $C_s$.)

As promised, in the case that $A$ is c.e., $c^{(A)}$ is the strongest cost function that $A$ obeys. In particular, $c^{(A)} \to c_\Omega$.

Proposition 5.2. Suppose that $A$ is c.e. For any cost function $c$ such that $A \models c$, we have $c^{(A)} \to c$.
Proof. After multiplying $c$ by a constant, we may assume that $c(0) < 1/2$. Fix a computable speed-up $f$ such that $c\langle A_{f(s)}\rangle < 1/2$ (again, see [27]). Define a Turing functional $\Gamma$ such that at every stage $t$,
\[ \mu\bigl(\{Y : A_{f(t)} \restriction x+1 \preceq \Gamma^Y_t\} - E_{\Gamma,t}\bigr) = c(x,t), \]
where $E_{\Gamma,t} = \{Y : \Gamma^Y_t \text{ lies to the left of } A_{f(t)}\}$. By a simple argument, $\mu(E_{\Gamma,t}) \le c\langle A_{f(s)}\rangle < 1/2$ for every $t$, so this construction may proceed. Fix $e$ with $\Phi_e = \Gamma$. Then $c^{(A)}(x) \ge 2^{-(e+1)}\, c(x)$.
We provide the promised applications to the ML-degrees.

Corollary 5.3. (a) There is no minimal pair in the ML-degrees of $K$-trivials. (b) The ML-degree of a noncomputable $K$-trivial never contains a maximal Turing degree.
Proof. (a) Given noncomputable $K$-trivials $A, B$, let $D$ be a noncomputable set obeying the cost function $c^{(A)} + c^{(B)}$. Then $D \le_{ML} A, B$ by Proposition 2.6.
(b) Suppose that $A$ is in the given ML-degree. By Theorem 3.1, we may assume that $A$ is c.e. Some c.e.\ $K$-trivial $B \not\le_T A$ obeys $c^{(A)}$ by [25, 5.3.22]; then $A \oplus B \equiv_{ML} A$, while $A \oplus B >_T A$.
Recall that $T(A)$ is the shift of $A$, obtained by deleting its first bit.

Proof. The main idea is to enumerate the set $A$ and to build the cost function $c$ so that $c$ has ``sudden drops'': numbers $x$ with $c(x)$ much smaller than $c(x-1)$.
Let $\langle B^e \rangle$ be a listing of all (possibly partial) computable enumerations. In particular, let $\langle D_n\rangle$ be an effective listing of the finite sets, and let $B^e_{t+1} = B^e_t \cup D_{\varphi_e(t+1)}$, where defined. At a stage $s$, we may declare $c(s-1, s) \ge \alpha$ for some dyadic rational $\alpha$, which by monotonicity entails that $c(y,t) \ge \alpha$ for each $y < s$ and $t \ge s$. At the end of stage $s$, we will define $c(x,s)$ for every $x < s$ to be the least value consistent with all of our declarations, and also with $c(x,s) \ge d(x,s)$.
We must meet the global requirements that $c$ has the limit condition and that $A \models c$. We must also meet the requirements
\[ R_e:\quad T(A) = \bigcup_t B^e_t \;\Rightarrow\; c\langle B^e_t\rangle \ge 1. \]
The strategy for $R_e$ seeks to find an $x$ and an $s$ where $c(x-1, s)$ is large and $x-1 \notin B^e_s$. Then it enumerates $x$ into $A$ and waits until it sees a $t > s$ with $x-1 \in B^e_t$. This will increase $c\langle B^e_t\rangle$ by at least $c(x-1,s)$. Then the strategy seeks to repeat the process with a new $x$, continuing until $c\langle B^e_t\rangle \ge 1$. To ensure that $c$ has the limit condition, we will give $R_e$ a bound $\alpha_e$ beyond which it is not allowed to increase $c$. This bound will also ensure that $R_e$ does not interfere with $R_{e'}$ for $e' < e$. To ensure that $A \models c$, we will not allow $R_e$ to cause enumerations with total cost exceeding $2^{-e}$. Other than a discussion of $\alpha_e$, our full strategy for $R_e$ is:

(1) Let $s$ be the current stage. Declare $c(s-1, s) \ge \alpha_e$.
(4) Wait until $B^e_r$ converges for some $r > s$ with $s-1 \in B^e_r$.

(5) If $c\langle B^e_t\rangle_{t=0}^{r} \ge 1$, terminate the strategy. Otherwise, return to Step (1).

Note that case (3a) might occur because of the actions of some other strategy, or might instead occur because of $c(s,u) \ge d(s,u)$. The latter can occur only finitely many times, because $d$ satisfies the limit condition.
Note also that if we reach Step (5), then $s-1 \notin B^e_s$, $s-1 \in B^e_r$, and $c(s-1, s) \ge \alpha_e$, so $c\langle B^e_t\rangle_{t=0}^{r} - c\langle B^e_t\rangle_{t=0}^{s} \ge \alpha_e$. Thus we will reach Step (5) at most $1/\alpha_e$ times before meeting the requirement and terminating the strategy. Each enumeration has a cost of $2^{-e}\alpha_e$ by construction, and so the total cost of the enumerations made by this strategy is at most $2^{-e}$.
If the strategy waits forever at Step (3), then either $\langle B^e_t\rangle$ is partial, or $s \notin A$ but $s-1 \in \bigcup_t B^e_t$, meaning that we satisfy $R_e$ by negating the hypothesis. It thus remains only to show that we do not return to Step (1) via case (3a) or (3b) infinitely many times.
We wish to ensure that no $R_{e'}$-strategy for $e' > e$ can increase $c(s,u)$ beyond $2^{-e}\alpha_e$. So we define $\alpha_0 = 1$ and $\alpha_{e+1} = 2^{-e}\alpha_e$. Now case (3a) cannot be caused by the action of any $R_{e'}$-strategy for $e' > e$. Nor can case (3b), because of our action at Step (2). A simple induction then shows that no strategy returns to Step (1) more than finitely many times.
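For the record, these recursive bounds have a simple closed form (a routine computation under the definitions above):
\[ \alpha_0 = 1, \quad \alpha_{e+1} = 2^{-e}\alpha_e \qquad\Longrightarrow\qquad \alpha_e = 2^{-\sum_{i<e} i} = 2^{-e(e-1)/2}. \]
In particular $\lim_e \alpha_e = 0$, so the declarations made by the strategies are compatible with the limit condition for $c$; and since the $R_e$-strategy causes enumerations of total cost at most $2^{-e}$, the total cost of the enumeration of $A$ is at most $\sum_e 2^{-e} = 2 < \infty$, as required for $A \models c$.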

K-trivial sets Turing below fragments of Ω
Previous research. We begin by discussing in some detail a theorem that motivated the present results. For a set $Z \subseteq \mathbb{N}$, thought of as a bit sequence, and an infinite set $R \subseteq \mathbb{N}$, we denote by $Z_R$ the sequence obtained by erasing the bits of $Z$ in locations outside of $R$. If $1 \le j \le n$, then the $j$th $n$-column of $Z$ is $Z_{(j-1+n\mathbb{N})}$. A set $A$ is a $k/n$-base if it is computable from the join of any $k$ of the $n$-columns of some random sequence $X$, in all possible ways. For a computable real $p$ such that $0 < p \le 1$, let $c_{\Omega,p}(x,s) = (\Omega_s - \Omega_x)^p$. As mentioned in the introduction, three of the authors of the present paper proved:

Theorem 6.1 ([14]). The following are equivalent for a set $A$ and $1 \le k < n$:
(1) $A$ is a $k/n$-base.
(2) $A$ is a $k/n$-base witnessed by $\Omega$, i.e., it is computable from the join of any $k$ of the $n$-columns of $\Omega$.
(3) $A$ obeys $c_{\Omega,k/n}$.

Hence the $p$-bases are characterised by cost functions. Theorem 4.3 implies that, for every rational $p \in (0,1)$, there is a smart $p$-base: a greatest ML-degree of $p$-bases. If $p < q$, then every $p$-base is also a $q$-base, as $c_{\Omega,p} \ge^{\times} c_{\Omega,q}$. However, there is also a $q$-base that is not a $p$-base. Thus, the smart $p$-bases form a dense chain of ML-degrees.
Using Theorem 6.6 below, we can add a fourth equivalent condition to Theorem 6.1, one that appears to be significantly weaker than (2):
(4) $A$ is $K$-trivial and is computable from the join of some choice of $k$ of the $n$-columns of $\Omega$.

In other words: if a $K$-trivial set is computable from some $k/n$-fragment of $\Omega$, then it is computable from every $k/n$-fragment of $\Omega$. Recall that any c.e.\ set computable from a Turing incomplete random set is $K$-trivial [18]. Since every $k/n$-fragment of $\Omega$ is incomplete, we obtain:

Corollary 6.2. If $X$ and $Y$ are both $k/n$-fragments of $\Omega$ for $k < n$, then $X$ and $Y$ compute the same c.e.\ sets. In particular, if a c.e.\ set is computable from one half of $\Omega$, then it is also computable from the other half.
The cost functions $c_R$. We now turn to a general analysis of the question of which $K$-trivials are computed by fragments of $\Omega$. For $n \ge 1$ and $T \subseteq \{1, 2, \dots, n\}$, let $R(T,n) = \bigcup_{j \in T}\, (j-1+n\mathbb{N})$. So $\Omega_{R(T,n)}$ is the join of the $n$-columns of $\Omega$ indexed by $T$ (up to a simple computable permutation, depending on how we take the join).
Let $R$ be an infinite computable set. The first question is how to generalise the cost function $c_{\Omega,k/n}$ to a cost function $c_{\Omega,R}$. A basic step in the analysis of $k/n$-bases was the observation that if $T \subseteq \{1,2,\dots,n\}$ has size $k$, then $\Omega_{R(T,n)}$ is captured by a $c_{\Omega,k/n}$-test; this gave the implication (3)→(2) of Theorem 6.1. We would like the bits of $\Omega_R$ that are given by $\Omega \restriction n$ to be captured by the $n$th component of a $c_{\Omega,R}$-test. So perhaps the first guess would be to define $c_{\Omega,R}(n) = (\Omega - \Omega_n)^{|R \cap n|/n}$. It turns out that this is not quite right; it works if $R = R(T,n)$, but that case is misleading, because there the density of initial segments of $R$ is more or less constant, namely $k/n$. What would work is $c_{\Omega,R}(n) = (\Omega - \Omega_n)^{|R \cap k(n)|/k(n)}$, where $k(n)$ is determined by $\Omega - \Omega_n \in (2^{-k(n)-1}, 2^{-k(n)}]$. However, it is not clear that this cost function will be monotonic if the density of $R$ varies. We get around this technical complication by using a ``discrete'' version.
Using this language, (i) in Theorem 6.6 states that $\Omega_S \le_{ML^*} \Omega_R$. The equivalence (i)↔(iii) in Theorem 6.6 provides a complete characterisation of $ML^*$-reducibility between fragments of $\Omega$ by a simple combinatorial condition on the underlying computable sets. The intuition is that as $R$ gets thinner, $\Omega_R$ gets computationally weaker (in the coarse sense of $ML^*$). The randomness-enhancement principle says that among ML-random sets, being computationally weaker is equivalent to being more random (see [26] and the discussion there). By this principle, $\Omega_R$ also gets more random as $R$ gets thinner.
Due to the relative length of the proof, we state and prove the implication (ii)→(iii) of Theorem 6.6 separately.

Proposition 6.8. Let $R$ and $S$ be infinite computable sets such that $|S \cap m| \not\le^{+} |R \cap m|$, i.e., the function $m \mapsto |S \cap m| - |R \cap m|$ is unbounded. Then $\Omega_R$ is not captured by a $c_{\Omega,S}$-test.

Proof. Suppose for a contradiction that $\Omega_R$ can be captured by a $c_{\Omega,S}$-test. Then, using Proposition 6.5, there is a nested test $\langle U_n\rangle$ capturing the sequence $Z = \Omega_R \oplus \Omega_{\overline{R}}$ with $\mu(U_n) \le c_{\Omega,S}(n) \cdot c_{\Omega,\overline{R}}(n)$. We show that this implies that $Z$ is not ML-random. To do this, we show how to uniformly enumerate an open set $V$ of small measure that contains $Z$.
Note that $|S \cap m| - |R \cap m| = |S \cap m| + |\overline{R} \cap m| - m$. Given a rational $\varepsilon > 0$, we can thus effectively find a $k$ with $|S \cap k| + |\overline{R} \cap k| - k > 1 - \log \varepsilon$.
Define a location $n_s > k$ recursively at stages $s \ge k$. Recall that $k_s(n) = \lfloor -\log_2(\Omega_s - \Omega_n) \rfloor$. For $s = k$, or if $k_{s+1}(n_s) < k$, we let $n_{s+1}$ be the least $n \ge k$ such that $k_{s+1}(n) > k$. Otherwise, we let $n_{s+1} = n_s$.
Let V " Ť sąk U ns,s .For each stage s ě k, we have k s pn s q ě k, so µpU ns,s q ď pc Ω,S pn s , sqq ¨pc Ω,R A pn s , sqq " The hardest implication is (iv)Ñ(i).To prove it, we rely on a lemma of interest on its own.Informally, the lemma says that if X ' Y is ML-random, but X is not too random in the sense that X fails a c Ω,R A -test for a co-infinite computable set R, then any K-trivial Turing below the "other side" Y obeys the complementary cost function c Ω,R .For example, let X " Ω R A and Y " Ω R , so that in addition Y fails a c Ω,R -test.In this case, any K-trivial set A obeying c Ω,R is below Y ; the lemma says that these are the only K-trivials below Y .(See Theorem 7.1.)Lemma 6.9.Let R Ď ω be computable and co-infinite.Suppose that X ' Y is ML-random, and that X is captured by a c Ω,R A -test.Suppose that A is K-trivial, and that A ď T Y .Then A obeys c Ω,R .
The proof will be the content of Section 8.

Remark 6.10. While $c_{\Omega,R}$ was only defined for infinite $R$, the definition can be interpreted for finite $R$ as well, in which case the cost function does not satisfy the limit condition, and the sets obeying it are exactly the computable ones. Lemma 6.9 holds for finite or co-finite $R$ as well. The case $|\overline{R}| < \infty$ tells us nothing: the hypothesis that $X$ fails a $c_{\Omega,\overline{R}}$-test is trivial, since there is such a test that captures the entire space; meanwhile, $c_{\Omega,R} =^{\times} c_\Omega$, so the conclusion $A \models c_{\Omega,R}$ is simply a restatement of the fact that $A$ is $K$-trivial.
The case $|R| < \infty$ is a weaker version of a known result: the assumption that $X$ fails a $c_{\Omega,\overline{R}}$-test tells us that $X$ is not $c_\Omega$-random, and thus that $X$ is LR-hard [3, Thm. 1.5]. Since $Y$ is $X$-random, $Y$ is 2-random, and so the only $K$-trivials that $Y$ computes are the computable sets.
Proof of Theorem 6.6, assuming Lemma 6.9. (i)→(ii): Let $A$ be smart for $c_{\Omega,S}$, by Theorem 4.3. Thus $A \models c_{\Omega,S}$. Note that $\Omega_S$ is captured by a $c_{\Omega,S}$-test by Proposition 6.5. By Proposition 2.6, $A \le_T \Omega_S$, and so $A \le_T \Omega_R$. By the definition of smartness for cost functions, $\Omega_R$ is captured by a $c_{\Omega,S}$-test.
(iv)→(i): From (iv) it follows that $c_{\Omega,\overline{S}} \le^{\times} c_{\Omega,\overline{R}}$. By Proposition 6.5, $X := \Omega_{\overline{S}}$ is captured by a $c_{\Omega,\overline{S}}$-test, which hence is a $c_{\Omega,\overline{R}}$-test. By Lemma 6.9, for any $K$-trivial $A \le_T \Omega_S =: Y$, we have $A \models c_{\Omega,R}$. Thus $A \le_T \Omega_R$ by Proposition 2.6. □ (6.6)

Feeble sets for cost functions, and the structure of ML-degrees

We discuss some ramifications of Theorem 6.6, and also of Lemma 6.9, which is a main technical ingredient in the proof of the theorem.
A notion that is dual to smartness for a cost function. We will call an ML-random sequence failing a $c$-test feeble for $c$ if the only $K$-trivials it computes are the ones that obey $c$. Recall that a $K$-trivial set obeying a cost function $c$ is smart for $c$ if only the ML-random sets that fail a $c$-test compute it (Def. 2.7). Feebleness for $c$ is dual to smartness for $c$: in each case, by Proposition 2.6, the definition says that the collection of sets of the ``opposite'' type that are Turing comparable to the given set is as small as possible. As a consequence of Lemma 6.9, we obtain a natural characterisation of the $K$-trivial sets that are Turing below $\Omega_R$ for an infinite computable set $R$.

Theorem 7.1. If $R$ is an infinite computable set, then $\Omega_R$ is feeble for $c_{\Omega,R}$. (So the $K$-trivials computable from $\Omega_R$ are exactly those that obey $c_{\Omega,R}$.)

Proof. By Proposition 6.5, $\Omega_R$ fails a $c_{\Omega,R}$-test.
Suppose that $A \le_T \Omega_R =: Y$ and $A$ is $K$-trivial. The sequence $\Omega_R \oplus \Omega_{\overline{R}}$ is ML-random, and $X := \Omega_{\overline{R}}$ fails a $c_{\Omega,\overline{R}}$-test, so $A \models c_{\Omega,R}$ by Lemma 6.9.
We make two observations that follow from the definitions of smartness and feebleness via Proposition 2.6. They tell us that cost functions that admit feeble sequences are special. The first observation implies that if $c$ has a feeble random sequence, then the collection of sets that obey $c$ determines a principal ideal in the ML-degrees.
To see that the map is a partial-order embedding, first suppose that $F \subseteq G$. Then $R(F) \subseteq R(G)$, and so $c_{\Omega,R(F)} \ge c_{\Omega,R(G)}$; by Proposition 7.3, $B_F \le_{ML} B_G$.
On the other hand, if $F \not\subseteq G$, take some $n \in F - G$; then $R_n \subseteq R(F)$ but $R_n \cap R(G) = \emptyset$. The fact that the upper density of $R_n$ is $1$ implies that $|R(F) \cap m| \not\le^{+} |R(G) \cap m|$. By Theorem 6.6, this implies that $c_{\Omega,R(F)} \not\to c_{\Omega,R(G)}$. Hence $B_F \not\le_{ML} B_G$ by Theorem 7.1 and Proposition 7.3.
We obtain a related structural result about the ML-degrees without using the tools developed above. We give an alternative construction of incomparable ML-degrees, and use it to prove downward density.

Proof. We extend Kučera's injury-free proof [20] of the Friedberg–Muchnik theorem, as presented in [25, Section 4.2]. That theorem states that there are Turing incomparable c.e.\ sets $A, B$. Two versions of Kučera's proof are given there; the first relies on $\{0,1\}$-valued d.n.c.\ functions as in [25, Cor. 4.2.3], the second on ML-randomness as in [25, Cor. 4.2.5]. The second version actually shows that there are ML-random $\Delta^0_2$ sets $Y, Z$ such that $A \le_T Y$, $B \le_T Z$, $A \not\le_T Z$, and $B \not\le_T Y$. Therefore $A \mid_{ML} B$, as witnessed by $Y, Z$.
To ensure that $A, B \le_T D$, all we need to do is modify [25, Cor. 4.2.5]:

Lemma 7.6. There is a computable function $r$ such that for each $e$, if $Y = \Phi^{\emptyset'}_e$ is total and ML-random, then $A = W_{r(e)} \le_{wtt} Y$, $A \le_T D$, and $A$ is non-computable.
To see this, we use the cost-function version of Kučera's result as presented in [15] and [25, 5.3.13]. Given an ML-random $\Delta^0_2$ set $Y$, one defines a cost function $c_Y$ such that if $A \models c_Y$, then $A \le_{wtt} Y$. The cost function $c_Y$ emulates a given computable approximation of $Y$, and is therefore obtained uniformly from an $e$ such that $Y = \Phi^{\emptyset'}_e$. The construction of a non-computable c.e.\ set $A$ obeying a given cost function with the limit condition [27, Thm. 2.7(i)] is compatible with simple permitting, so we can ensure that $A \le_T D$. It is also uniform in the cost function (when $D$ is fixed). So we obtain the c.e.\ set $A$ uniformly in $e$, as required.
Remark 7.7. The following may be relevant to Question 1.2 above. We distinguish $\le_{ML}$ from certain variants that are clearly arithmetical. Fix a notation $\eta$ for an infinite computable ordinal. Intuitively, a $\Delta^0_2$ set is $\eta$-c.a.\ if it has a computable approximation in which the number of changes is bounded by counting down in the canonical computable well-order given by $\eta$. See e.g.\ [12, Def. 7.1] for the formal definition of $\eta$-c.a.\ sets and for more background.
Restricting the ML-randoms in Definition 2.1 of $\le_{ML}$ to the $\eta$-c.a.\ sets yields a reducibility strictly weaker than $\le_{ML}$. For the $\eta$-c.a.\ sets form a $\Sigma^0_3$ class, so there is a noncomputable c.e.\ set $D$ below all the $\eta$-c.a.\ ML-randoms. Now, by Theorem 7.5, let $A, B \le_T D$ be c.e.\ sets such that $A \mid_{ML} B$. Then $A$ and $B$ are equivalent in the sense of the reducibility based on $\eta$-c.a.\ sets. Note that by the proof of Theorem 7.5, in fact $A$ and $B$ are incomparable even for the weaker variant of ML-reducibility based on ML-random $\Delta^0_2$ sets.
Proof of Lemma 6.9

Recall that for $\mathcal{A} \subseteq 2^\omega$ and $Z \in 2^\omega$, the Lebesgue (binary) lower density $\varrho_2(\mathcal{A} \mid Z)$ of $\mathcal{A}$ at $Z$ is $\liminf_n \mu(\mathcal{A} \mid Z \restriction n)$, where $\mu(\mathcal{A} \mid \sigma) = \mu(\mathcal{A} \cap [\sigma])/\mu([\sigma])$ is the conditional probability of $\mathcal{A}$ given $[\sigma]$. Notice that $\varrho_2(2^\omega \times \mathcal{A} \mid X \oplus Z) = \varrho_2(\mathcal{A} \mid Z)$ for any bit sequence $X$.
A difference test is a test of the form $\langle U_n \cap P\rangle$, where the open sets $U_n$ are uniformly $\Sigma^0_1$ and nested, $P$ is $\Pi^0_1$, and $\mu(U_n \cap P) \le 2^{-n}$. Franklin and Ng [10] proved that an ML-random sequence $Z$ is difference random (i.e., passes all difference tests) if and only if $Z$ is Turing incomplete.
A bit sequence $Z$ is a positive density point if the lower density $\varrho_2(P \mid Z)$ is positive for every $\Pi^0_1$ class $P$ that contains $Z$. (If $Z$ is ML-random, then it makes no difference whether one takes the binary density or the full density defined in the setting of the unit interval.) We also require a result from [4], due to Bienvenu, Hölzl and two of the authors of the present paper. It implies that an ML-random sequence is difference random if and only if it is a positive density point. Furthermore, the failure of these properties is witnessed by the same $\Pi^0_1$ classes, an observation we will use below.

Fact 8.1 ([4], Lemma 3.3). Suppose that $Q$ is a $\Pi^0_1$ class that contains an ML-random sequence $Z$. Then $Z$ fails a difference test of the form $\langle V_n \cap Q\rangle$ iff $Q$ has lower density $0$ at $Z$.
The purpose of this section is to prove Lemma 6.9, which we recall here. We first give a proof in the case that $A$ is c.e.

Lemma 6.9. Let $R \subseteq \omega$ be computable and co-infinite. Suppose that $X \oplus Y$ is ML-random, and that $X$ is captured by a $c_{\Omega,\overline{R}}$-test. Suppose that $A$ is $K$-trivial, and that $A \le_T Y$. Then $A$ obeys $c_{\Omega,R}$.
Proof when $A$ is c.e. Fix a computable enumeration $\langle A_s\rangle$ of $A$. Fix a $c_{\Omega,\overline{R}}$-test $\langle U_n\rangle$ that $X$ fails, and fix a functional $\Phi$ with $A = \Phi^Y$. Let $E$ be the error set for $\Phi$ with respect to $A$: as before, $E_s$ is the set of oracles $Z$ such that $\Phi^Z_s$ lies to the left of $A_s$. Let $Q = 2^\omega \times (2^\omega - E)$ (and $Q_s = 2^\omega \times (2^\omega - E_s)$).
We carry out a ``ravenous sets'' construction on $Q$; for background, see [14, Section 3.1]. Uniformly in $k, n \in \omega$, we enumerate $\Sigma^0_1$ open sets $V^k_n \subseteq 2^\omega \times 2^\omega$. The goal for $V^k_n \cap Q$ is $2^{-k}(\Omega_{n+1} - \Omega_n)$; we will ensure that no set ever exceeds its goal. In [18], a set playing a role similar to that of $V^k_n$ was called ``hungry'' if it had not reached its goal. The sets $V^k_n$ are called ``ravenous'' here, rather than just ``hungry'', because we may feed them with oracle strings that later leave $Q$, in which case they get hungry again.
We will also ensure that $V^k_n$ is disjoint from $V^k_m$ for $n \ne m$. The parameter $k$ determines the goal for these ravenous sets; otherwise, the constructions for distinct $k$ are independent. The other property that we ensure is that $V^k_n \subseteq U_n \times 2^\omega$, which is immediate from the palatability requirement below.

Construction of the sets $V^k_n$, for parameter $k$. At stage $0$, we begin with $V^k_n$ empty for every $n \in \omega$. At every stage $s$, we call one of the sets $V^k_n$ ``awake'', and the others ``asleep''. We start with $V^k_0$ awake. At stage $s$, if $V^k_n$ is awake, then it has not reached its goal, i.e., $\mu(V^k_{n,s} \cap Q_s) < 2^{-k}(\Omega_{n+1} - \Omega_n)$, and so we try to feed it. We call a product $[\sigma] \times [\tau]$ of basic clopen sets palatable (at stage $s$) if: it is disjoint from $V^k_{m,s}$ for all $m$; $[\sigma] \subseteq U_{n,s}$; and $A_s \restriction n+1 \preceq \Phi^\tau_s$ (in particular, $[\sigma] \times [\tau]$ is covered by $Q_s$). Note that if $[\sigma'] \times [\tau']$ is contained in a palatable set, then it is itself palatable. By standard assumptions on the enumerations of $\Phi$ and $U$, we can effectively obtain a finite antichain of palatable sets covering all palatable sets, and so determine the total measure covered by palatable sets. If this measure is less than the appetite of $V^k_n$, i.e., less than $2^{-k}(\Omega_{n+1} - \Omega_n) - \mu(V^k_{n,s} \cap Q_s)$, we enumerate all the palatable sets into $V^k_{n,s+1}$ and declare $V^k_n$ to be awake at stage $s+1$. If instead the measure covered by palatable sets at stage $s$ exceeds the appetite of $V^k_n$, then we choose, in some effective fashion, a finite antichain of palatable sets covering measure exactly $2^{-k}(\Omega_{n+1} - \Omega_n) - \mu(V^k_{n,s} \cap Q_s)$, and enumerate this antichain into $V^k_{n,s+1}$. We then put $V^k_n$ to sleep and declare $V^k_m$ to be awake at stage $s+1$, where $m$ is least such that $V^k_m$ has not reached half its goal: $\mu(V^k_{m,s} \cap Q_s) < 2^{-(k+1)}(\Omega_{m+1} - \Omega_m)$. (Such an $m$ always exists, of course, because all but finitely many $V^k_{m,s}$ are empty. It is important to note, though, that measure may later leave $Q$, and so a set $V^k_n$ could be put to sleep but re-awakened later.)
Verification. Since $X$ fails a $c_{\Omega,\overline{R}}$-test, it is not 2-random. Since $X$ is ML-random relative to $Y$, this implies that $Y$ is Turing incomplete. So by the result of Franklin and Ng, $Y$ is difference random. Since $Y \notin E$, by Fact 8.1, $2^\omega - E$ has positive density at $Y$. Hence $Q$ has positive density at $X \oplus Y$, by the fact mentioned at the beginning of this section. Let $k$ be such that $(X,Y) \notin V^k_n$ for any $n \in \omega$. In the remainder of the proof, we omit the superscript $k$.

We first show that every set $V_n$ eventually reaches half its goal.

Claim 6.9.1. For every $n$, there is a stage $t$ such that for all $s \ge t$, $\mu(V_{n,s} \cap Q_s) \ge 2^{-(k+1)}(\Omega_{n+1} - \Omega_n)$.
Proof. Fix $n$. There is a $\sigma \prec X$ with $[\sigma] \subseteq U_n$, and there is a $\tau \prec Y$ with $A \restriction n+1 \preceq \Phi^\tau$. Fix $t_0$ such that $[\sigma] \subseteq U_{n,t_0}$, $A_{t_0} \restriction n+1 = A \restriction n+1$, and $A \restriction n+1 \preceq \Phi^\tau_{t_0}$. As each $V_{m,s}$ is the union of a finite antichain and does not contain $(X,Y)$, at every $s \ge t_0$ at which $V_n$ is awake, there is some palatable $[\sigma'] \times [\tau']$ with $\sigma \preceq \sigma' \prec X$ and $\tau \preceq \tau' \prec Y$. We do not enumerate any neighbourhood covering $[\sigma'] \times [\tau']$ into $V_{n,s+1}$, so by construction, $V_n$ is asleep at stage $s+1$.
If $s_0$ is a stage at which $V_n$ goes to sleep and $s_1 > s_0$ is a stage at which $V_n$ wakes back up, then $\mu(Q_{s_0} - Q_{s_1}) > 2^{-(k+1)}(\Omega_{n+1} - \Omega_n)$. Thus $V_n$ can go to sleep only finitely often. It follows that for every $n$, there are only finitely many stages at which $V_n$ is awake. Let $t$ be the last stage at which any $V_m$ for $m \le n$ went to sleep. Then $\mu(V_{n,s} \cap Q_s) \ge 2^{-(k+1)}(\Omega_{n+1} - \Omega_n)$ for every $s \ge t$: for otherwise, when the current $V_j$ goes to sleep, either $V_n$ or some $V_m$ for $m < n$ would wake, contrary to the choice of $t$. □ (6.9.1)

We now define a pair of computable functions $f$ and $g$ by simultaneous recursion. We begin by setting $f(-1) = -1$. Given $f(s-1)$, we define $f(s) > f(s-1)$ and $g(s)$ sufficiently large so that for every $n < s$, $\Omega_{f(s)} - \Omega_n \le 2(\Omega_{g(s)} - \Omega_n)$, and for every $n < g(s)$, $\mu(V_{n,f(s)} \cap Q_{f(s)}) \ge 2^{-(k+1)}(\Omega_{n+1} - \Omega_n)$.
Note that such values always exist: if $g(s)$ is such that $\Omega - \Omega_s \le 2(\Omega_{g(s)} - \Omega_s)$, then the first requirement is satisfied for every $f(s)$; and given any $g(s)$, a sufficiently large choice of $f(s)$ will satisfy the second requirement, by Claim 6.9.1. Thus we can find such a pair of values by exhaustive search, and $f$ and $g$ are total.
Recall the following notation from Section 6: $k_s(n) = \lfloor -\log_2(\Omega_s - \Omega_n) \rfloor$.
Observe that $k_{f(s)}(n) \ge k_{g(s)}(n) - 1$ for all $n < s$, and so $c_{\Omega,\overline{R}}(n, f(s)) \le 2\, c_{\Omega,\overline{R}}(n, g(s))$.
The following claim will complete the proof that $A$ obeys $c_{\Omega,R}$.
Claim 6.9.2. The total cost $c_{\Omega,R}\langle A_{f(s+1)}\rangle$ is bounded by $2^{k+3}$.
Proof. Fix a stage $s$, and let $n$ be least such that $n \in A_{f(s+1)} - A_{f(s)}$. We may assume that $n < s$. Then for all $m \ge n$, $\pi_2[V_{m,f(s)}] \subseteq E_{f(s+1)}$, where $\pi_2 : 2^\omega \times 2^\omega \to 2^\omega$ is the projection onto the second coordinate. Let $S = \bigcup_{m \ge n} V_{m,f(s)} \cap Q_{f(s)}$. Note that, by the definition of the classes $Q_t$,
(6.1) \quad $\mu(E_{f(s+1)} - E_{f(s)}) \ge \mu(\pi_2[S])$.
On the other hand, $\pi_1[S] \subseteq U_{n,f(s)}$, where $\pi_1$ is the projection onto the first coordinate, and $\mu(U_{n,f(s)}) \le c_{\Omega,\overline{R}}(n, f(s)) \le 2\, c_{\Omega,\overline{R}}(n, g(s))$. Since $S \subseteq \pi_1[S] \times \pi_2[S]$, we have $\mu(\pi_2[S]) \ge \mu(S)/\mu(\pi_1[S])$. Therefore, by (6.1), $\mu(E_{f(s+1)} - E_{f(s)}) \ge 2^{-k-3}\, c_{\Omega,R}(n, g(s)) \ge 2^{-k-3}\, c_{\Omega,R}(n, s)$. By definition, the total cost is the sum over all stages $s$ of the costs of the least change at that stage. We conclude that $c_{\Omega,R}\langle A_{f(s+1)}\rangle \le 2^{k+3}\, \mu(E)$.
We now lift the c.e.\ case to the general case. Thanks to Theorem 3.1, this is relatively easy, compared, say, to the approach taken in [14].

Lemma 8.2. For any infinite computable set $R$, obedience to $c_{\Omega,R}$ is closed downward under Turing reducibility.
Proof. This is similar to [14, Prop. 2.3]. For brevity, let $f(k) = |R \cap k|$. We only use the facts that $f$ is non-decreasing and that there is a $d \in \mathbb{N}$ such that $f(k+1) \le f(k) + d$ for each $k$. As in Section 6, let $k(n) = \lfloor -\log_2(\Omega - \Omega_n) \rfloor$ and $k_s(n) = \lfloor -\log_2(\Omega_s - \Omega_n) \rfloor$, so that $\lim_s k_s(n) = k(n)$ in a non-increasing fashion.
Let $B$ be a $\Delta^0_2$ set that obeys $c_{\Omega,R}$, and let $\langle B_t\rangle$ be a computable approximation of $B$ witnessing that $B \models c_{\Omega,R}$. Let $A \le_T B$, say $A = \Psi^B$ for some functional $\Psi$.
Hence there is a $b \in \mathbb{Z}$ such that $k(n) \ge b + k(\psi(n))$ for each $n$. We define an increasing sequence of stages $s(i)$, starting with $s(0) = 0$; $s(i)$ is the least stage $s > s(i-1)$ such that $|\Psi^{B_s}_s| > i$ and, for all $n \le i$, $k_{i+1}(n) \ge b + k_s(\psi_s(n))$, where $\psi_s(n)$ is the use of the computation $\Psi^{B_s}_s(n)$. We then let $A_i = \Psi^{B_{s(i)}}_{s(i)}$. We claim that the approximation $\langle A_i\rangle$ witnesses that $A$ obeys $c_{\Omega,R}$. The reason is that if $A_i(n) \ne A_{i+1}(n)$ and $n \le i$, then the $A$-cost paid is $2^{-f(k_{i+1}(n))}$, whereas at some stage $t \in (s(i), s(i+1)]$ we see a change in $B$ below $v = \psi_{s(i)}(n)$, showing that the total cost paid by $B$ along this interval of stages is at least $2^{-f(k_t(v))} \ge 2^{-f(k_{s(i)}(v))}$. This allows us to bound the $A$-cost by the assumed properties of $f$, as $k_{i+1}(n) \ge b + k_{s(i)}(v)$.
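The bound implicit in the last sentence can be spelled out as follows (our computation; $|b|$ denotes the absolute value of the constant $b$). Since $k_{s(i)}(v) \le k_{i+1}(n) - b \le k_{i+1}(n) + |b|$ and $f(k+1) \le f(k) + d$,
\[ 2^{-f(k_{i+1}(n))} \;\le\; 2^{|b|d}\, 2^{-f(k_{s(i)}(v))}, \]
so, summing over all changes, $c_{\Omega,R}\langle A_i\rangle \le 2^{|b|d}\, c_{\Omega,R}\langle B_t\rangle < \infty$.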
Note that, in fact, obedience to $c_{\Omega,R}$ is closed downward under ML-reducibility (by Theorem 7.1), but the proof of that theorem used Lemma 6.9.
Proof of Lemma 6.9 in the general case. Let $A$ be $K$-trivial and suppose that the hypotheses of the lemma hold. By Theorem 3.1, let $C \ge_T A$ be c.e.\ such that $C \equiv_{ML} A$. The ML-equivalence implies that $C \le_T Y$; the c.e.\ case shows that $C \models c_{\Omega,R}$. By Lemma 8.2, $A$ obeys $c_{\Omega,R}$ as well.

Fragments of Ω and strong jump-traceability
A cost function $c$ is benign [15] if, from a rational $\varepsilon > 0$, we can compute a bound on the length of any sequence $n_1 < s_1 \le n_2 < s_2 \le \cdots \le n_\ell < s_\ell$ such that $c(n_i, s_i) \ge \varepsilon$ for all $i \le \ell$. For example, $c_\Omega$ is benign, with the bound being $1/\varepsilon$.
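To illustrate the bound $1/\varepsilon$ for $c_\Omega$ numerically (the code and all names are ours; `omega` is a toy stand-in for the approximation $\langle \Omega_s\rangle$), one can greedily extract such a sequence from a nondecreasing rational approximation and check the length bound: the increments $\Omega_{s_i} - \Omega_{n_i}$ sit over disjoint index intervals, so their sum, and hence $\ell \cdot \varepsilon$, is at most $\Omega \le 1$.

```python
from fractions import Fraction

def chain_length(omega, eps):
    """Greedily build a sequence n_1 < s_1 <= n_2 < s_2 <= ... with
    omega[s_i] - omega[n_i] >= eps, and return its length ell.
    Since the increments are disjoint, ell * eps <= omega[-1] <= 1."""
    length, n = 0, 0
    for s in range(1, len(omega)):
        if omega[s] - omega[n] >= eps:
            length += 1
            n = s                     # next block starts at n_{i+1} = s_i
    return length

omega = [Fraction(k, 2 * (k + 1)) for k in range(50)]  # increases towards 1/2
eps = Fraction(1, 10)
assert chain_length(omega, eps) <= 1 / eps             # the benignity bound
```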
By an order function we mean a computable, non-decreasing, unbounded function. A set $A$ is strongly jump-traceable if for every order function $h$ and every $\psi$ partial computable in $A$, there is an $h$-bounded c.e.\ trace for $\psi$; that is, a uniformly c.e.\ sequence $\langle T(n)\rangle$ such that $|T(n)| \le h(n)$ for all $n$, and $\psi(n) \in T(n)$ for all $n \in \operatorname{dom} \psi$. Figueira et al.\ [9] introduced this notion, and built a noncomputable c.e.\ set of this kind.
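The bookkeeping in this definition can be illustrated by a small sketch (entirely ours, and illustrating only the definition, not the existence theorem): given stagewise snapshots of a partial function and an order function $h$, one collects at most $h(n)$ candidate values per input.

```python
def build_trace(snapshots, h):
    """snapshots[s] maps n to the value of psi(n) as computed at stage s
    (inputs on which psi is still undefined are simply absent). Returns a
    trace T with |T(n)| <= h(n): T(n) keeps the first h(n) distinct
    candidate values that the stagewise computation produces."""
    T = {}
    for snapshot in snapshots:
        for n, v in snapshot.items():
            bucket = T.setdefault(n, [])
            if v not in bucket and len(bucket) < h(n):
                bucket.append(v)
    return T

# psi(0) settles on 5 after an earlier wrong guess; psi(1) never converges
stages = [{0: 7}, {0: 5}, {0: 5}]
T = build_trace(stages, h=lambda n: n + 2)    # h(0) = 2 candidates allowed
assert T[0] == [7, 5]                         # |T(0)| <= h(0), and 5 is traced
```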
Plain (rather than strong) jump-traceability is the notion in which one quantifies existentially over order functions $h$; for c.e.\ sets, this is equivalent to superlowness. Quantifying universally over order functions, by contrast, places a very strong restriction on the computational power of the set. For instance, the strongly jump-traceable sets form an ideal in the Turing degrees [5,7] which is a proper sub-ideal of the $K$-trivials [8]. A characterisation that will concern us here is that a set is strongly jump-traceable if and only if it obeys all benign cost functions [15,7]. For more on strong jump-traceability, see the survey article [16].
One can characterise strong jump-traceability using computability from ML-random sequences, and there is more than one such characterisation. For example, a set is strongly jump-traceable if and only if it is computable from all superlow ML-random sequences [12], and also if and only if it is computable from all superhigh ML-random sequences [12,16]. Alternatively, a c.e.\ set is strongly jump-traceable if and only if it is computable from a Demuth random sequence, by combining [17] and [21]; this extends to all $K$-trivials by Theorem 3.1. As a consequence, one has:

Proposition 9.1. The strongly jump-traceable sets form an ideal in the ML-degrees.
In [14, 5.3.1] it is observed that every strongly jump-traceable set is a $p$-base for all $p > 0$. However, this is not a characterisation. The sets that are $p$-bases for all $p > 0$ are the $1/\omega$-bases: those which are computable from each column of an infinite partition of some random sequence. Equivalently, they are the sets computable from $\Omega_R$ for all computable sets $R$ such that $\liminf_n |R \cap n|/n$ is positive. Some such sets are not strongly jump-traceable, as pointed out in [14, 5.3.1]. Here we see that we obtain a characterisation of strong jump-traceability if we drop the density condition.

Proposition 9.2. For any infinite computable set $R$, the cost function $c_{\Omega,R}$ is benign.
Proof. Given a rational $\varepsilon > 0$, we first compute an $m$ with $2^{-|R \cap m|} < \varepsilon$. Let $n_1 < s_1 \le n_2 < s_2 \le \cdots \le n_\ell < s_\ell$ be a sequence such that $c_{\Omega,R}(n_i, s_i) > \varepsilon$ for all $i \le \ell$. This means that $k_{s_i}(n_i) < m$, and so $\Omega_{s_i} - \Omega_{n_i} \ge 2^{-m}$. So $\ell \cdot 2^{-m} \le \sum_{i \le \ell} (\Omega_{s_i} - \Omega_{n_i}) \le \Omega < 1$, and thus $\ell < 2^m$.

Proposition 9.3. For any benign cost function $c$, there is an infinite computable set $R$ with $c_{\Omega,R} \to c$.
Proof. Suppose that $g(\varepsilon)$ is a computable bound witnessing that $c$ is benign. We will construct a left-c.e.\ real $\beta < 1$. Since $\Omega$ is Solovay complete, by the recursion theorem we may assume that we already know a constant $\delta > 0$ and a computable approximation of $\Omega$ such that $\delta(\beta_s - \beta_n) < \Omega_s - \Omega_n$ for all $n$ and $s$. Choose a computable sequence $m_0 < m_1 < \cdots$ such that $\sum_i 2^{-m_i}\, g(2^{-(i+1)}\delta) < 1$.
Proof. We show that $c(n,s) \le c_{\Omega,R}(n,s)$ for all $s$ and all $n < s$. Suppose this holds for $s$; we verify it for $s+1$. Suppose that there is an $n \le s$, chosen least, such that $c(n, s+1) > c_{\Omega,R}(n, s)$ (otherwise there is nothing to do for $s+1$). For all $n' < n$, $c(n', s+1) \le c_{\Omega,R}(n', s) \le c_{\Omega,R}(n', s+1)$.
Definition 2.1 ([3]). For sets $A$ and $B$, we write $A \le_{ML} B$ if $B \le_T Y$ implies $A \le_T Y$ for every ML-random sequence $Y$.

Cost functions were introduced in [25, Section 5.3] and developed further in [15, 27].

Definition 2.2. A cost function is a computable function $c : \mathbb{N} \times \mathbb{N} \to \{r \in \mathbb{R} : r \ge 0\}$.

Theorem 5.4. For every cost function $d$ there is a cost function $c \ge d$ and a c.e.\ set $A$ such that $A \models c$ and $T(A) \not\models c$.

Since $c^{(A)} \to c$, this shows that $T(A) \not\models c^{(A)}$. Thus $c^{(T(A))} \not\to c^{(A)}$. In contrast, for each ML-random $Y$,
\[ Y \text{ fails some } c^{(A)}\text{-test} \iff Y \ge_T A \iff Y \ge_T T(A) \iff Y \text{ fails some } c^{(T(A))}\text{-test} \]
(see Definition 2.5 for $c$-tests). So we have a pair of inequivalent cost functions that determine the same randomness notion.

Theorem 7.5. For every non-computable c.e.\ set $D$, there are c.e.\ sets $A, B \le_T D$ such that $A \mid_{ML} B$.
Now let A t " 9A qptq , so n s " m s .Since s ď qps ´1q, if n s ă s then the inner summation above is at least c xAty pn s , sq.So ř s c xAty pn s , sq ă 8, as desired.
τ PBs rτ s " U A ae ns`1 rss, and define S " Ť s B s .Note that ÿ s µpU A ae ns`1 rssq " ÿ s c xAty pn s , sq ă 8.