Leavitt path algebras: the first decade

The algebraic structures known as {\it Leavitt path algebras} were initially developed in 2004 by Ara, Moreno and Pardo, and almost simultaneously (using a different approach) by the author and Aranda Pino. During the intervening decade, these algebras have attracted significant interest and attention, not only from ring theorists, but from analysts working in C$^*$-algebras, group theorists, and symbolic dynamicists as well. The goal of this article is threefold: to introduce the notion of Leavitt path algebras to the general mathematical community; to present some of the important results in the subject; and to describe some of the field's currently unresolved questions.

Our goal in writing this article is threefold: first, to provide a history and overall viewpoint of the ideas which comprise the subject of Leavitt path algebras; second, to give the reader a general sense of the results which have been achieved in the field; and finally, to give a broad picture of some of the research lines which are currently being pursued. The history and overall viewpoint (Section 1) are presented with a completely general mathematical audience in mind; the writing style here will be more chatty than formal. Our description of the results in the field has been split into two pieces: we describe Leavitt path algebras of row-finite graphs (Sections 2, 3, and 4), and subsequently discuss various generalizations of these (Section 5). Our intent and hope in ordering the presentation this way is to allow the non-expert to appreciate the key ideas of the subject, without getting ensnarled in the at-first-glance formidable constructs which drive the generalizations. We close with Section 6, in which we describe some of the current lines of investigation in the subject. In part, our hope here is to attract mathematicians from a wide variety of fields to join in the research effort. The exhilarating increase in the level of interest in Leavitt path algebras during the first decade since their introduction has resulted in the publication of scores of articles on these and related structures. Certainly it is not the goal of the current article to review the entirety of the literature in the subject. Rather, we have tried to strike a balance between presenting enough information to make clear the beauty and diversity of the subject on the one hand, while avoiding "information overload" on the other. Apologies are issued in advance to those authors whose work in the field has consequently not been included herein.
In keeping with our goal of making this article accessible to a broad audience, we will offer either a complete, formal Proof, or an intuitive, informal Sketch of Proof, only for specific results for which such proofs are particularly illuminating. In other situations we will simply present statements without proof. Appropriate references are provided for all key results.
Acknowledgments: This work was partially supported by Simons Foundation Collaboration Grant #20894. The author thanks P.N.Ánh, Pere Ara, Zachary Mesyan, Paul Muhly, Enrique Pardo, and Mark Tomforde for helping to clarify some of the historical aspects of the subject. The author is extremely grateful to S. Paul Smith for providing a summary of germane ideas related to noncommutative algebraic geometry, and for a number of suggestions which clarified the presentation. Finally, the author warmly thanks the referee for providing significant, valuable feedback on the original version of this article.

History and overview
The fundamental examples of rings that are encountered during one's algebraic pubescence (e.g., fields K, the integers Z, the polynomial ring K[x], the Laurent polynomial ring K[x, x^{-1}], and the matrix rings M_n(K)) all have the following property.
Definition 1.1. The unital ring R has the Invariant Basis Number (IBN) property in case, for each pair i, j ∈ N, if the left R-modules R^i and R^j are isomorphic, then i = j.
A wide class of rings can easily be shown to have the IBN property, including rings possessing any sort of reasonable chain condition on one-sided ideals, as well as commutative rings. But there are naturally occurring examples of algebras which are not IBN.
Let V be a countably infinite dimensional vector space over a field K, and let R denote End_K(V), the algebra of linear transformations from V to itself. It is not hard to see that R^i ≅ R^j as left (or right) R-modules for any pair i, j ∈ N, as follows. One starts by viewing R as the K-algebra RFM_N(K) consisting of those N × N matrices M having the property that each row of M contains at most finitely many nonzero entries. (In this context we view transformations as acting on the right, and define composition of transformations by setting f ∘ g to mean "first f, then g". Of course, depending on the reader's tastes, the same analysis can be performed by considering the analogous algebra CFM_N(K) of column-finite matrices.) Then a left-module isomorphism _RR → _RR^2 is easy to establish, by considering the map that associates with any row-finite matrix M the pair of row-finite matrices (M_1, M_2), where M_1 is built from the odd-indexed columns of M, and M_2 from the even-indexed columns. Once such an isomorphism _RR ≅ _RR^2 is guaranteed, then by using the obvious generalization of this construction we get _RR^i ≅ _RR^j for all i, j ∈ N.

The following is easy to prove.

Lemma 1.2. Let R be a unital ring and let n ≥ 2. Then _RR ≅ _RR^n as left R-modules if and only if there exist elements x_1, ..., x_n, y_1, ..., y_n of R for which

y_i x_j = δ_{i,j} 1_R (for all 1 ≤ i, j ≤ n), and Σ_{i=1}^n x_i y_i = 1_R. ( †)

In effect, the ring R = End_K(V) lies on the complete opposite end of the spectrum from the IBN property, in that every pair of finitely generated free left R-modules are isomorphic; such a ring is said to have the Single Basis Number (SBN) property. The question posed (and answered completely) by William G. Leavitt in the early 1960's regards the existence of a middle ground between IBN and SBN: do there exist rings for which _RR^i ≅ _RR^j for some, but not all, pairs i, j ∈ N? If we assume an isomorphism exists between _RR^i and _RR^j for some pair i ≠ j, then clearly by appending k copies of R to this isomorphism we get that _RR^{i+k} ≅ _RR^{j+k}. With this idea in mind, and using only basic properties of the semigroup N × N, it is easy to prove the following.

Lemma 1.3. Let R be a unital ring.
Assume R is not IBN, i.e., that there exist i ≠ j ∈ N with _RR^i ≅ _RR^j. Let m be the least integer for which _RR^m ≅ _RR^j for some j ≠ m. For this m, let n denote the least integer for which _RR^m ≅ _RR^n and n > m. Let k denote n − m ∈ N. Then for any pair i, j ∈ N, _RR^i ≅ _RR^j if and only if i = j, or both i, j ≥ m and i ≡ j (mod k). We call the pair (m, n) the module type of R. (We caution that some authors, including Leavitt, instead use the phrase module type to denote the pair (m, k).) In particular, the ring R = End_K(V) has module type (1, 2).
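The isomorphism criterion of Lemma 1.3 is easy to test computationally. The following sketch (the function name and encoding are ours, purely for illustration) implements it and checks a few consequences.

```python
# A hedged sketch of the isomorphism criterion of Lemma 1.3, for a ring
# of module type (m, n), with k = n - m.  The encoding is illustrative,
# not taken from the article.

def free_modules_isomorphic(i: int, j: int, m: int, n: int) -> bool:
    """Decide whether R^i and R^j are isomorphic as left R-modules over a
    ring R of module type (m, n), via Lemma 1.3: R^i = R^j iff i == j,
    or both i, j >= m and i = j (mod k), where k = n - m."""
    k = n - m
    return i == j or (i >= m and j >= m and (i - j) % k == 0)

# Module type (1, 2): every pair of f.g. free modules is isomorphic (SBN).
assert all(free_modules_isomorphic(i, j, 1, 2)
           for i in range(1, 10) for j in range(1, 10))

# Module type (2, 5), so k = 3: R^1 is isomorphic only to itself,
# while e.g. R^2 = R^5 and R^3 = R^6, but R^2 != R^3.
assert free_modules_isomorphic(2, 5, 2, 5)
assert free_modules_isomorphic(3, 6, 2, 5)
assert not free_modules_isomorphic(2, 3, 2, 5)
assert not free_modules_isomorphic(1, 4, 2, 5)
```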
With Lemma 1.3 in mind, Leavitt proved the following "anything-that-can-happen-actually-does-happen" result.

Theorem 1.4. (Leavitt's Theorem) [76, Theorem 8] Let m, n ∈ N with n > m, and let K be any field. Then there exists a K-algebra L_K(m, n) having module type (m, n). Additionally:

(1) L_K(m, n) is universal, in the sense that if S is any K-algebra having module type (m, n), then there exists a nonzero K-algebra homomorphism ϕ : L_K(m, n) → S.

(2) In the case m = 1, the algebra L_K(1, n) is simple.
(3) L K (m, n) is explicitly described in terms of generators and relations.
We will refer to L K (m, n) as the Leavitt algebra of type (m, n).
Since the ring R = RFM_N(K) ≅ End_K(V) has module type (1, 2), by Lemma 1.2 there necessarily exists a set of four elements in R which satisfy the appropriate relations; for clarity, we note that one such set is given by Y_1 = (δ_{i,2i−1}), Y_2 = (δ_{i,2i}), X_1 = (δ_{2i−1,i}), and X_2 = (δ_{2i,i}).
Moreover, the subalgebra of RFM_N(K) generated by these four matrices is isomorphic to L_K(1, 2).
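One can watch the relations ( †) hold for these matrices on finite truncations. The sketch below uses rectangular M × 2M slices of the infinite matrices (a sizing choice of ours, made so that the products close up exactly); in RFM_N(K) itself the matrices are of course infinite.

```python
# Finite-dimensional shadows of the four row-finite matrices realizing
# L_K(1,2) inside RFM_N(K).  We truncate to M rows / 2M columns so that
# the products below are exact; the truncation is illustrative only.
import numpy as np

M = 5
Y1 = np.zeros((M, 2 * M)); Y2 = np.zeros((M, 2 * M))
X1 = np.zeros((2 * M, M)); X2 = np.zeros((2 * M, M))
for i in range(M):
    Y1[i, 2 * i] = 1       # entry (i, 2i - 1) in 1-indexed notation
    Y2[i, 2 * i + 1] = 1   # entry (i, 2i)
    X1[2 * i, i] = 1       # entry (2i - 1, i)
    X2[2 * i + 1, i] = 1   # entry (2i, i)

# The defining relations ( †) of L_K(1, 2):
#   y_i x_j = delta_ij * 1   and   x_1 y_1 + x_2 y_2 = 1.
assert np.array_equal(Y1 @ X1, np.eye(M))
assert np.array_equal(Y2 @ X2, np.eye(M))
assert np.array_equal(Y1 @ X2, np.zeros((M, M)))
assert np.array_equal(Y2 @ X1, np.zeros((M, M)))
assert np.array_equal(X1 @ Y1 + X2 @ Y2, np.eye(2 * M))
```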
In the specific case when m = 1, the explicit description of the algebra L_K(1, n) given in [76] yields that L_K(1, n) is the free associative K-algebra K⟨x_1, ..., x_n, y_1, ..., y_n⟩ modulo the relations given in ( †). So, with Lemma 1.2 in mind, we may view L_K(1, n) as essentially the "smallest" algebra of type (1, n).
In the following subsection we will rediscover the algebras L K (1, n) from a different point of view.
We begin this subsection with a well-known idea, perhaps cast more formally than is typical. Let P be a finitely generated projective left R-module. (The notions of "finitely generated" and "projective" are categorical, and make sense even in case R is nonunital.) Let V(R) denote the set of isomorphism classes [P] of finitely generated projective left R-modules; then V(R) becomes a commutative monoid under the operation [P] ⊕ [Q] = [P ⊕ Q]. For any idempotent e ∈ R, [Re] ∈ V(R); indeed, elements of V(R) of this form will play a central role in the subject. If R is a division ring, then V(R) ≅ Z^+; the same is true for R = Z, as well as for various additional classes of rings. The wide range of monoids which can arise as V(R) will be demonstrated in Theorem 1.6. For an arbitrary ring R, it's fair to say that an explicit description of V(R) is typically hard to come by. The well-studied Grothendieck group K_0(R) of R is precisely the universal abelian group corresponding to the commutative monoid V(R).
When R is unital then [R] ∈ V(R). Key information about R may be provided by the pair (V(R), [R]). If R and S are isomorphic as rings, then there exists an isomorphism of monoids ϕ : V(R) → V(S) for which ϕ([R]) = [S].

In addition to the commutativity of (V(R), ⊕), the monoid V(R) has the following two easy-to-see properties. First, V(R) is conical: if x, y ∈ V(R) have x ⊕ y = 0, then x = y = 0. Second (for R unital), V(R) contains a distinguished element d: for each x ∈ V(R), there exists y ∈ V(R) and n ∈ N having x ⊕ y = nd (specifically, d = [R]). In 1974, George Bergman established the following remarkable result.

Theorem 1.6. (Bergman's Theorem) [49] Let M be a finitely generated conical commutative monoid, let d ∈ M be a nonzero distinguished element in the above sense, and let K be any field. Then there exists a unital K-algebra B = B(M, d) for which (V(B), [B]) ≅ (M, d).

(3) The construction of B(M, d) depends on the specific representation of M as F/⟨R⟩, where F is a finitely generated free abelian monoid, and R is a given (finite) set of relations in F. With F and R viewed as starting data, the algebra B = B(M, d) = B(F/⟨R⟩, d) is constructed explicitly via a finite sequence of steps, where each step consists of adjoining elements satisfying explicitly specified relations (provided by R) to an explicitly described algebra.
(3) Let n ∈ N, n ≥ 2. Let V_n denote the free abelian monoid having a single generator x, subject to the relation nx = x. So V_n = {0, x, 2x, ..., (n − 1)x}, |V_n| = n, and (V_n, x) clearly satisfies the hypotheses of Bergman's Theorem. (In [49], the semigroup V_n is denoted V_{1,n}.) In this situation, Bergman's explicit construction yields that B(V_n, x) = K⟨x_1, ..., x_n, y_1, ..., y_n⟩, with relations given by exactly the same defining relations as given in ( †) above, namely, y_i x_j = δ_{i,j} 1 and Σ_{i=1}^n x_i y_i = 1. Consequently, as observed in [49, Theorem 6.1], the Bergman algebra B(V_n, x) is precisely the Leavitt algebra L_K(1, n).
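As a quick concrete check on the monoid V_n, here is a minimal sketch (our own encoding, not from [49]) of its arithmetic: a multiple kx is reduced to its normal form, with 0 fixed and k ≥ 1 reduced modulo n − 1 into the range 1, ..., n − 1.

```python
# A minimal model of V_n = {0, x, 2x, ..., (n-1)x} with the defining
# relation n*x = x.  The normal form of k*x is 0 when k = 0, and
# ((k-1) mod (n-1) + 1)*x when k >= 1.  Encoding is illustrative only.

def normal_form(k: int, n: int) -> int:
    """Coefficient of the normal form of k*x in V_n."""
    return 0 if k == 0 else (k - 1) % (n - 1) + 1

def add(a: int, b: int, n: int) -> int:
    """The sum a*x + b*x in V_n, returned in normal form."""
    return normal_form(a + b, n)

n = 4                              # V_4 = {0, x, 2x, 3x}, with 4x = x
assert normal_form(n, n) == 1      # the defining relation n*x = x
assert add(2, 3, n) == 2           # 2x + 3x = 5x = 2x in V_4
# V_n is conical: a + b = 0 forces a = b = 0.
assert all(add(a, b, n) != 0 for a in range(1, n) for b in range(0, n))
```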
Because of the central role they play in both the genesis and the ongoing development of Leavitt path algebras, no history of the subject would be complete without a discussion of graph C*-algebras. We present here only the most basic description of these algebras, just enough so that even the reader who is completely unfamiliar with them can get a sense of their connection to Leavitt path algebras. Throughout this subsection all algebras are assumed to be unital algebras over the complex numbers C (but most of these ideas can be cast significantly more generally). The algebra A is a *-algebra in case there is a map * : A → A which has: (x + y)* = x* + y*; (xy)* = y*x*; 1* = 1; (x*)* = x; and (α · x)* = ᾱx* for all x, y ∈ A and α ∈ C, where ᾱ denotes the complex conjugate of α. Standard examples of *-algebras include matrix rings M_n(C) (where * is 'conjugate transpose'), and the ring C(T) of continuous functions from the unit circle T to C (where * is pointwise complex conjugation).

A C*-norm on a *-algebra A is a function ‖·‖ : A → R^+ for which: ‖ab‖ ≤ ‖a‖ · ‖b‖; ‖a + b‖ ≤ ‖a‖ + ‖b‖; ‖aa*‖ = ‖a‖^2 = ‖a*‖^2; ‖a‖ = 0 ⇔ a = 0; and ‖λa‖ = |λ| ‖a‖ for all a, b ∈ A and λ ∈ C. For A = M_n(C), a C*-norm on A is given by the operator norm, where we view elements of M_n(C) as operators C^n → C^n, with the Euclidean norm on C^n. (This operator norm assigns to M ∈ M_n(C) the square root of the largest eigenvalue of the matrix M*M.) A C*-norm on C(T) is also given by an operator norm.
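The operator-norm description just given can be checked numerically. This sketch (helper name ours) computes the norm on M_n(C) as the square root of the largest eigenvalue of M*M and verifies the C*-identity for a random complex matrix.

```python
# Numerical check of the C*-identity ||a a*|| = ||a||^2 for the operator
# norm on M_n(C), with ||M|| = sqrt(largest eigenvalue of M* M)
# (equivalently, the largest singular value of M).  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

def op_norm(M: np.ndarray) -> float:
    """Operator norm: sqrt of the largest eigenvalue of M* M."""
    return float(np.sqrt(np.max(np.linalg.eigvalsh(M.conj().T @ M))))

# Agreement with the largest singular value, and the C*-norm axioms
# ||aa*|| = ||a||^2 = ||a*||^2.
assert np.isclose(op_norm(A), np.linalg.svd(A, compute_uv=False).max())
assert np.isclose(op_norm(A @ A.conj().T), op_norm(A) ** 2)
assert np.isclose(op_norm(A.conj().T), op_norm(A))
```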
A C*-norm on a *-algebra A induces a topology on A in the usual way, by defining the ε-ball around an element a ∈ A to be {b ∈ A | ‖b − a‖ < ε}.

Definition 1.8. A C*-algebra is a *-algebra A endowed with a C*-norm ‖·‖, for which A is complete with respect to the topology induced by ‖·‖.
A second description of a C * -algebra, from an operator-theoretic point of view, is given here. Let H be a Hilbert space, and let B(H) denote the continuous linear operators on H. A C * -algebra is an adjoint-closed subalgebra of B(H) which is closed with respect to the norm topology on B(H). In general, and especially relevant in the current context, one often builds a C * -algebra by starting with a given set of elements in B(H), and then forming the smallest C * -subalgebra of B(H) which contains that set.
A partial isometry is an element x in a C*-algebra A for which y = x*x is a self-adjoint idempotent; that is, in case y* = y and y^2 = y. Such elements are characterized as those elements z of A for which zz*z = z in A. For instance, in M_n(C), any element which is the sum of distinct matrix units e_{i,i} (1 ≤ i ≤ n) is a partial isometry (indeed, a projection); there are other partial isometries in M_n(C) as well. Since the only idempotents in C(T) are the constant functions 0 and 1, it is not hard to show that the set of partial isometries in C(T) consists of the zero function together with those functions f for which |f(t)| = 1 for all t ∈ T.

The study of C*-algebras has its roots in the early development of quantum mechanics; these were used to model algebras of physical observables. Various questions about the structure of C*-algebras arose over the years. One of the most important of these questions, the explicit description of a separable simple infinite C*-algebra, was resolved in 1977 by Cuntz ([56, Theorem 1.12]). A C*-algebra is simple in case it contains no nontrivial closed two-sided ideals. (It can be shown that this is equivalent to the algebra containing no nontrivial two-sided ideals, closed or not.) A C*-algebra is infinite in case it contains an element x for which x*x = 1 and xx* ≠ 1.
Cuntz' Theorem. Let n ∈ N with n ≥ 2. Consider a Hilbert space H, and a set {S_i}_{i=1}^n of isometries on H (i.e., S_i*S_i = 1 for all 1 ≤ i ≤ n) for which Σ_{i=1}^n S_iS_i* = 1, and let O_n denote the C*-subalgebra of B(H) generated by {S_i}_{i=1}^n. Then the infinite separable C*-algebra O_n is simple.
Indeed, Cuntz proves much more in [56, Theorem 1.12] than we have stated here. Additionally, it is shown in [56, Theorem 1.13] that if X is any nonzero element in O_n, then there exist A, B ∈ O_n for which AXB = 1.
Cuntz notes that the condition Σ_{i=1}^n S_iS_i* = 1 implies that the projections S_iS_i* are pairwise orthogonal. So the C*-algebra O_n is the C*-completion of a C-subalgebra T_n of B(H), where T_n as a C-algebra is generated by the isometries S_1, ..., S_n. Since a C*-algebra is adjoint-closed, we see that O_n may also be viewed as the C*-completion of a C-subalgebra L_n of B(H), where L_n is generated by S_1, ..., S_n together with the adjoints S_1*, ..., S_n*. In retrospect, such a C-algebra L_n is seen to be isomorphic to L_C(1, n).
Subsequent to the appearance of [56], a number of researchers in operator algebras investigated natural generalizations of the Cuntz C*-algebras O_n; see especially [59]. In the early 1980's, various constructions of C*-algebras corresponding to directed graphs were studied by Watatani and others (e.g., [100]). Even though, via this approach, the Cuntz algebra O_n could be realized as the C*-algebra corresponding to the graph R_n (see Example 2.13 below), this methodology did not gain much traction at the time. Instead, the study of these C*-algebras from a different point of view (arising from matrices with non-negative integer entries, or arising from groupoids) became more the vogue. But then, in the fundamental article [75] (in which groupoids are still in the picture, and the corresponding graphs could not have sinks), and the subsequent followup articles [74] and [47], the power of constructing a C*-algebra based on the data provided by a directed graph became clear.
Definition 1.9. A directed graph is a quadruple E = (E^0, E^1, s, r), where E^0 and E^1 are sets (the vertices and edges of E, respectively), and s and r are functions from E^1 to E^0 (the source and range functions of E, respectively). A sink is an element v ∈ E^0 for which s^{-1}(v) = ∅. E is finite in case both E^0 and E^1 are finite sets.

Definition 1.10. Let E be a finite graph. Let C*(E) denote the universal C*-algebra generated by a collection of mutually orthogonal projections {p_v | v ∈ E^0} together with partial isometries {s_e | e ∈ E^1} which satisfy the Cuntz-Krieger relations:

(CK1) s_e*s_e = p_{r(e)} for all e ∈ E^1, and
(CK2) p_v = Σ_{{e∈E^1 | s(e)=v}} s_es_e* for each non-sink v ∈ E^0.
For example, in [74] the authors were able to identify those finite graphs E for which C*(E) is simple, and those for which C*(E) is purely infinite simple. (The germane graph-theoretic terms will be described in Notations 2.7 and 3.2 below. A unital C*-algebra A is purely infinite simple in case A ≇ C, and for each 0 ≠ x ∈ A there exist a, b ∈ A with axb = 1.)

Theorem 1.11. (Simplicity and Purely Infinite Simplicity Theorems for graph C*-algebras) Let E be a finite graph. Then C*(E) is simple if and only if the only hereditary saturated subsets of E^0 are trivial, and every cycle in E has an exit. Moreover, C*(E) is purely infinite simple if and only if C*(E) is simple, and E contains at least one cycle.
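Both conditions in Theorem 1.11 are finite graph-theoretic checks, so they can be brute-forced for small graphs. The sketch below is our own encoding (graphs as lists of source/range pairs, with the standard meanings of "hereditary" and "saturated": a subset H is hereditary if edges leaving H land in H, and saturated if any non-sink all of whose ranges lie in H itself lies in H); it is illustrative, not an implementation from the literature.

```python
# Brute-force check of the two conditions in the Simplicity Theorem for a
# small finite graph.  Graph encoding (list of (source, range) pairs) and
# names are this sketch's own conventions.
from itertools import combinations, chain

def is_hereditary_saturated(H, vertices, edges):
    Hs = set(H)
    hereditary = all(r in Hs for (s, r) in edges if s in Hs)
    saturated = all(not (out := [r for (s, r) in edges if s == v])
                    or not all(r in Hs for r in out) or v in Hs
                    for v in vertices)
    return hereditary and saturated

def every_cycle_has_exit(vertices, edges):
    # Enumerate cycles: closed edge-sequences with no repeated vertex.
    def cycles_from(v, path_vs, path_es):
        for idx, (s, r) in enumerate(edges):
            if s != v:
                continue
            if r == path_vs[0]:
                yield path_es + [idx]
            elif r not in path_vs:
                yield from cycles_from(r, path_vs + [r], path_es + [idx])
    for v0 in vertices:
        for cyc in cycles_from(v0, [v0], []):
            cyc_vs = {edges[i][0] for i in cyc}
            if not any(edges[j][0] in cyc_vs and j not in cyc
                       for j in range(len(edges))):
                return False   # this cycle has no exit
    return True

def simple_graph_algebra(vertices, edges):
    proper = chain.from_iterable(combinations(vertices, r)
                                 for r in range(1, len(vertices)))
    trivial_hs_only = not any(is_hereditary_saturated(H, vertices, edges)
                              for H in proper)
    return trivial_hs_only and every_cycle_has_exit(vertices, edges)

# The rose with two petals (one vertex, two loops): simple.
assert simple_graph_algebra(['v'], [('v', 'v'), ('v', 'v')])
# A single loop with no exit: not simple.
assert not simple_graph_algebra(['v'], [('v', 'v')])
```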
Subsequently, in [47], results were clarified, sharpened, and extended; and the groupoid techniques were eliminated from the arguments.
During the same timeframe, Kirchberg (unpublished) and Phillips ([85]) independently proved a beautiful, deep result which classifies up to isomorphism a class of C * -algebras satisfying various properties. Although the now-so-called Kirchberg Phillips Theorem covers a wide class of C * -algebras, it manifests in the particular case of purely infinite simple graph C * -algebras as follows.
Theorem 1.12. (The Kirchberg Phillips Theorem for graph C*-algebras) Let E and F be finite graphs. Suppose C*(E) and C*(F) are purely infinite simple. Suppose there is an isomorphism of groups ϕ : K_0(C*(E)) → K_0(C*(F)) for which ϕ([1_{C*(E)}]) = [1_{C*(F)}]. Then C*(E) ≅ C*(F).

The work described in [47] became the basis of a newly-energized research program in the C*-algebra community, a program which continues to flourish to this day. For additional information about graph C*-algebras, see [86]; for a more complete description of the history of graph C*-algebras, see [99, Appendix B].
With the overview of Leavitt algebras, Bergman algebras, and graph C * -algebras now in place, we are in position to describe the genesis of Leavitt path algebras.
There are two plot lines to the history.
The first historical plot line begins with an investigation into the algebraic notion of purely infinite simple rings, begun by Ara, Goodearl, and Pardo (each of whom has significant expertise in both ring theory and C * -algebras) in [35]. In it, the authors "... extend the notion of a purely infinite simple C * -algebra to the context of unital rings, and study its basic properties, especially those related to K-theory".
The authors note in the introduction of [35] that "The Cuntz algebra O_n is the C*-completion of the Leavitt algebra V_{1,n} over the field of complex numbers." Although this connection between the Cuntz and Leavitt algebras is now viewed as almost obvious, it was not until the early 2000's that such a connection was first noted in the literature. (A somewhat earlier mention of this connection appears in [3]; the observation in [3] was included at the request of an anonymous referee.) With the notion of purely infinite simple rings so introduced, the same three authors (together with González-Barroso) set out to find large classes of explicit examples of such rings. With the purely infinite simple graph C*-algebras as motivation, the four authors in [29] introduced the "algebraic Cuntz-Krieger (CK) algebras." (Retrospectively, these are seen to be the Leavitt path algebras corresponding to finite graphs having neither sources nor sinks, and which do not consist of a disjoint union of cycles.) These algebraic Cuntz-Krieger algebras arose as specific examples of fractional skew monoid rings, and the germane ones were shown to be purely infinite simple by using techniques which applied to the more general class.
With the K-theory of the corresponding graph C*-algebras in mind, it was then natural to ask analogous K-theoretic questions about the algebraic CK algebras. In addition, earlier work by Ara, Goodearl, O'Meara and Pardo [33] regarding semigroup-theoretic properties of the monoid V(R) (e.g., separativity and refinement) for various classes of rings provided the motivation to ask similar questions about V(A) for these algebras A.
Once various specific examples had been completely worked out, it became clear to Ara and Pardo that much of the information about the V-monoid of the algebraic CK algebras could be seen directly in terms of relations between vertices and edges in an associated graph E. Indeed, these relations between vertices and edges could be codified as information which could then be used to generate a monoid in a natural way, defined here.

Definition 1.13. Let E be a finite graph, with E^0 = {v_1, v_2, ..., v_n}. The graph monoid M_E of E is the free abelian monoid on a generating set {a_{v_1}, a_{v_2}, ..., a_{v_n}}, modulo the relations

a_v = Σ_{{e∈E^1 | s(e)=v}} a_{r(e)} for each non-sink v ∈ E^0.

In a private communication to the author, Enrique Pardo wrote that, with all this information and background as context,

... at some moment [early in 2004] one of us suggested that probably Bergman's coproduct construction would be a good manner of solving the computation and prove that both monoids coincide.
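The graph monoid of Definition 1.13 can be probed computationally: elements of the free abelian monoid on E^0 are vectors of multiplicities, and the defining relations glue a vector to the vector obtained by replacing one a_v (v a non-sink) by Σ_{s(e)=v} a_{r(e)}. The sketch below (encoding entirely ours) explores this congruence over vectors of bounded size via union-find; equalities it finds are genuine, while non-equalities are only evidence, since the bound truncates the search.

```python
# Bounded exploration of the graph monoid M_E.  Vectors of vertex
# multiplicities are glued whenever one is a single-step rewrite of the
# other by a defining relation.  Equalities found are certain; apparent
# inequalities are only evidence (the search is truncated at `bound`).
from itertools import product

def monoid_classes(vertices, edges, bound):
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]; x = parent[x]
        return x
    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry
    states = list(product(range(bound + 1), repeat=len(vertices)))
    for s in states:
        parent[s] = s
    for s in states:
        for i, v in enumerate(vertices):
            out = [r for (sv, r) in edges if sv == v]
            if s[i] == 0 or not out:
                continue                     # nothing to rewrite at v
            t = list(s); t[i] -= 1           # remove one a_v ...
            for r in out:
                t[vertices.index(r)] += 1    # ... add a_{r(e)} for each e
            t = tuple(t)
            if t in parent:
                union(s, t)
    return find

# Rose with 3 petals: M_E should be V_3 = {0, x, 2x} with 3x = x.
find = monoid_classes(['v'], [('v', 'v')] * 3, bound=9)
assert find((1,)) == find((3,))      # a_v = 3 a_v
assert find((2,)) == find((4,))
assert find((1,)) != find((2,))      # a_v and 2 a_v remain distinct
```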
Once some additional necessary machinery was included (the notion of a complete subgraph), then Ara and Pardo, together with Pardo's colleague Mariangeles Moreno-Frías, had all the ingredients in hand to make the following definition, and prove the subsequent theorem, in [36]. (We state the definition and theorem here only for finite graphs; these results were established for more general graphs in [36], and the general version will be discussed below.)

Definition 1.14. ([36, p. 161]) Let E be a finite graph, and let K be a field. We define the graph K-algebra L_K(E) associated with E as the K-algebra generated by a set {p_v | v ∈ E^0} together with a set {x_e, y_e | e ∈ E^1}, which satisfy the following relations:

(1) p_v p_w = δ_{v,w} p_v for all v, w ∈ E^0.
(2) p_{s(e)} x_e = x_e p_{r(e)} = x_e for all e ∈ E^1.
(3) p_{r(e)} y_e = y_e p_{s(e)} = y_e for all e ∈ E^1.
(4) y_e x_{e'} = δ_{e,e'} p_{r(e)} for all e, e' ∈ E^1.
(5) p_v = Σ_{{e∈E^1 | s(e)=v}} x_e y_e for every v ∈ E^0 that emits edges.
We note that both the terminology used in this definition ("graph algebra") and the notation are quite similar to the terminology and notation which were already being employed in the context of graph C*-algebras. By examining the proof of [36, Theorem 3.5], and using Bergman's Theorem, we can in fact restate this fundamental result as follows: for any finite graph E and field K, there is a monoid isomorphism V(L_K(E)) ≅ M_E. In the same groundbreaking article [36], Ara, Moreno, and Pardo were also able to establish a connection between the V-monoids of L_K(E) and C*(E).

We conclude our discussion of Historical Plot Line #1 in the development of Leavitt path algebras by again quoting Enrique Pardo:

For us the motivation was to give an algebraic framework to all these families of (purely infinite simple) C*-algebras associated to combinatorial objects, say Cuntz-Krieger algebras and graph C*-algebras. For this reason we always looked at properties that were known in the C* case and were related to combinatorial information: we wanted to know which part of these results relies in algebraic information, and which ones in analytic information. So, we looked at K-Theory, stable rank, exchange property (in C*-algebras this is real rank zero property), prime and primitive ideals, the classification problem and Kirchberg-Phillips Theorem...
We will visit each of these topics later in the article.
The second historical plot line begins with the author's interest in Leavitt's algebras, specifically the algebras L_K(1, n). For instance, these algebras were used in [1] to produce non-IBN rings having unexpected isomorphisms between their matrix rings; were used again in [2] to solve a question (posed in [79]) about strongly graded rings; and were subsequently investigated yet again in [3], in joint work with P. N. Ánh of the Rényi Institute of Mathematics (Hungarian Academy of Sciences, Budapest).
During a Spring 2001 visit to the University of Iowa, Ánh met the analyst Paul Muhly.¹ Subsequently, Ánh invited Muhly to give a talk at the Rényi Institute (during a 2003 trip that Muhly and his wife were making to Budapest anyway, to visit their son); it was during this visit that the two mathematicians began to consider the potential for connections between various topics. Muhly was one of the organizers of the May/June 2004 NSF-CBMS conference² "Graph Algebras: Operator Algebras We Can See", delivered by Iain Raeburn, held at the University of Iowa. Muhly consequently extended invitations to attend that conference to the author, to Ánh, and to a handful of other ring theorists.³ During conference coffee break discussions, the algebraists began to realize that when one considered the "pre-completion" version of the graph C*-algebras, the remaining algebraic structure looked quite familiar, specifically, as some sort of modification of the well-known notion of a quiver algebra or path algebra.

¹ The Ánh / Muhly meeting was quite fortuitous. Ánh was a visiting research guest of Kent Fuller at the University of Iowa during Spring Semester 2001. Fuller regularly went to lunch at various Iowa City restaurants with a group of his departmental colleagues, an excursion in which Paul Muhly was a frequent participant; Fuller of course invited Ánh to join in.

² National Science Foundation - Conference Board in Mathematical Sciences. The NSF-CBMS Regional Research Conferences in the Mathematical Sciences are a series of five-day conferences, each of which features a distinguished lecturer delivering ten lectures on a topic of important current research in one sharply focused area of the mathematical sciences.

³ V. Camillo, L. Márki, and E. Ortega also attended.
Definition 1.17. Let F = (F^0, F^1, s, r) be a graph and K any field. The path K-algebra of F (also known as the quiver K-algebra of F), denoted KF, is the K-vector space having basis Path(F), the set of directed paths in F (including the vertices, viewed as paths of length 0), with multiplication given by the K-linear extension of path concatenation: for paths p and q, the product pq is the concatenation of p and q when r(p) = s(q), and pq = 0 otherwise.

Gonzalo Aranda Pino visited the author's home institution for the period July 2004 through December 2004. Early in Aranda Pino's visit, the author shared with him some of the ideas which had been discussed in Iowa City during the previous month. A few weeks of collaborative effort subsequently led to the following.
Definition 1.18. Let E = (E^0, E^1, s, r) be a graph. The extended graph of E is the graph Ê = (E^0, E^1 ⊔ (E^1)*, s', r'), where (E^1)* = {e_i* : e_i ∈ E^1} is a set of newly-adjoined edges (the ghost edges), and the functions r' and s' are defined by setting r'|_{E^1} = r, s'|_{E^1} = s, r'(e_i*) = s(e_i), and s'(e_i*) = r(e_i).

Definition 1.19. Let E be a finite graph and K any field. The Leavitt path K-algebra L_K(E) is defined as the path K-algebra KÊ, modulo the relations:

(CK1) e_i* e_j = δ_{i,j} r(e_j) for every e_j ∈ E^1 and e_i* ∈ (E^1)*.
(CK2) v_i = Σ_{{e_j∈E^1 | s(e_j)=v_i}} e_j e_j* for every v_i ∈ E^0 which is not a sink.

Some of the notation which was developed in the C*-algebra context is also used in the Leavitt path algebra world, e.g., the use of the "CK" labels to denote the two key relations. (Cf. Definition 1.10).
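To make the defining relations concrete: for the graph with two vertices v_1, v_2 and a single edge e from v_1 to v_2, the Leavitt path algebra can be identified with M_2(K) by sending the vertices to the diagonal matrix units and e, e* to the off-diagonal ones (a standard identification). The sketch below simply verifies the relations numerically under that identification.

```python
# The Leavitt path algebra of the graph  v1 --e--> v2,  realized inside
# M_2(K): v1, v2 become the diagonal matrix units, and e, e* the
# off-diagonal ones.  We verify the defining relations numerically.
import numpy as np

v1 = np.array([[1, 0], [0, 0]])      # vertex idempotents
v2 = np.array([[0, 0], [0, 1]])
e = np.array([[0, 1], [0, 0]])       # the edge
e_star = np.array([[0, 0], [1, 0]])  # its ghost edge

# Vertices are mutually orthogonal idempotents.
assert np.array_equal(v1 @ v1, v1) and np.array_equal(v1 @ v2, 0 * v1)
# Path-algebra relations: s(e) e = e r(e) = e.
assert np.array_equal(v1 @ e, e) and np.array_equal(e @ v2, e)
# (CK1): e* e = r(e) = v2.
assert np.array_equal(e_star @ e, v2)
# (CK2): v1 = e e*  (v1 is the only non-sink, emitting the single edge e).
assert np.array_equal(e @ e_star, v1)
```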
With both Leavitt's Theorem (part 2 of Theorem 1.4) and the Simplicity Theorem for graph C*-algebras (Theorem 1.11) in mind, the author and Aranda Pino focused their initial investigation on an internal, multiplicative question about the algebras L_K(E): for which graphs E and fields K is L_K(E) simple? This question was answered in [7], using techniques completely unlike those utilized to achieve Theorem 1.11.

By making the obvious correspondences v ↔ p_v, e ↔ x_e, and e* ↔ y_e, we see immediately: For a finite graph E and field K, the graph K-algebra of Definition 1.14 is the same algebra as the Leavitt path K-algebra of Definition 1.19.
It is of historical interest to note that the work on [7] was started in July 2004. Subsequently, [7] was submitted for publication in September 2004, accepted for publication in June 2005, appeared online in September 2005, and appeared in print in November 2005. On the other hand, the work on [36] was started in early 2004. Subsequently, [36] was submitted for publication in late 2004 (and posted on arXiv at that time), and accepted for publication in early 2005, but did not appear in print until April 2007. So even though [7] appeared in print eighteen months prior to the appearance in print of [36], in fact most of the mathematical work done to produce the latter preceded that of the former.
Both [7] and [36] should be viewed as the foundational articles on the subject.

Leavitt path algebras of row-finite graphs: general properties and examples
Section 1 of this article was meant to give the reader an overall view of the motivating ideas which led naturally to the construction of Leavitt path algebras. Over the next three sections we describe some of the key ideas and results for Leavitt path algebras arising from row-finite graphs. Subsequently, in Section 5 we relax this hypothesis on the graphs. (For those results which do not extend verbatim to the unrestricted case, we will indicate in the statement that the graph must be row-finite (or finite); otherwise, we will make no such stipulation in the statement.)

Definition 2.1. The graph E = (E^0, E^1, s, r) is called row-finite in case s^{-1}(v) is a finite set for each v ∈ E^0; that is, in case each vertex of E emits at most finitely many edges.

Here is the formal definition of a Leavitt path algebra arising from a row-finite graph.
Definition 2.2. Let E = (E^0, E^1, s, r) be a row-finite graph and K any field. Let Ê denote the extended graph of E. The Leavitt path K-algebra L_K(E) is defined as the path K-algebra KÊ, modulo the relations:

(CK1) e_i* e_j = δ_{i,j} r(e_j) for every e_j ∈ E^1 and e_i* ∈ (E^1)*.
(CK2) v_i = Σ_{{e_j∈E^1 | s(e_j)=v_i}} e_j e_j* for every non-sink v_i ∈ E^0.

Equivalently, we may define L_K(E) as the free associative K-algebra on generators E^0 ⊔ E^1 ⊔ (E^1)*, modulo the relations:

(1) vw = δ_{v,w} v for all v, w ∈ E^0.
(2) s(e)e = er(e) = e for all e ∈ E^1.
(3) r(e)e* = e*s(e) = e* for all e ∈ E^1.
(4) e*f = δ_{e,f} r(e) for all e, f ∈ E^1.
(5) v = Σ_{{e∈E^1 | s(e)=v}} ee* for every non-sink v ∈ E^0.
It is established in [97] that the expected map from L_C(E) to C*(E) is in fact injective. With this and the construction of the graph C*-algebra C*(E), we get:

Proposition 2.3. For any graph E, L_C(E) is isomorphic to a dense *-subalgebra of C*(E).
The interplay between graphs and algebras will play a major role in the theory. It is important to note at the outset that in general, if F is a subgraph of E, then L_K(F) need not correspond to a subalgebra of L_K(E), because the (CK2) relation imposed at a vertex v in L_K(F) need not be the same as the relation imposed at v in L_K(E). For a row-finite graph E, a subgraph F is said to be complete in case, whenever v ∈ F^0, then either s_F^{-1}(v) = ∅ or s_F^{-1}(v) = s_E^{-1}(v). (In other words, if v ∈ F^0, then either v emits no edges in F, or emits the same edges in F as it does in E.) Perhaps not surprisingly, when F is a complete subgraph of E, then there is an injection of algebras L_K(F) ↪ L_K(E). Moreover:

Proposition 2.4. The assignment E ↦ L_K(E) can be extended to a functor L_K from the category of row-finite graphs and complete graph inclusions to the category of K-algebras and (not necessarily unital) algebra homomorphisms. The functor L_K commutes with direct limits. It follows that every L_K(E) for a row-finite graph E is the direct limit of graph algebras corresponding to finite graphs.
Because of Proposition 2.4, it is often the case that a result which holds for the Leavitt path algebras of finite graphs can be extended to the row-finite case.
Definition 2.5. Let E be any graph and A any K-algebra. A Leavitt E-family in A is a set S = {a_v | v ∈ E^0} ∪ {b_e, c_e | e ∈ E^1} ⊆ A whose elements satisfy the relations of Definition 2.2; that is, the a_v are mutually orthogonal idempotents, a_{s(e)}b_e = b_ea_{r(e)} = b_e, a_{r(e)}c_e = c_ea_{s(e)} = c_e, c_eb_f = δ_{e,f}a_{r(e)}, and a_v = Σ_{{e∈E^1 | s(e)=v}} b_ec_e for each non-sink v ∈ E^0.

By the description of L_K(E) as a quotient of a free associative K-algebra modulo the germane relations given in Definition 2.2, we immediately get the following result, which often proves to be quite useful in the subject.

Proposition 2.6. (Universal Homomorphism Property of Leavitt path algebras) Let E be a graph, and suppose S is a Leavitt E-family in the K-algebra A. Then there exists a unique K-algebra homomorphism ϕ : L_K(E) → A for which ϕ(v) = a_v, ϕ(e) = b_e, and ϕ(e*) = c_e for all v ∈ E^0 and e ∈ E^1.

Notation 2.7. A sequence of edges α = e_1, e_2, ..., e_n in a graph E for which r(e_i) = s(e_{i+1}) for all 1 ≤ i ≤ n − 1 is called a path of length n. We typically denote such α more simply by e_1e_2···e_n. Each vertex v of E is viewed as a path of length 0. The set of paths of length n in E is denoted by E^n; the set of all paths in E is denoted Path(E). So we have Path(E) = ⊔_{n∈Z^+} E^n.
For α = e_1 e_2 ··· e_n ∈ Path(E), s(α) denotes s(e_1), r(α) denotes r(e_n), and Vert(α) denotes the set {s(e_1), s(e_2), ..., s(e_n), r(e_n)}. The path e_1 e_2 ··· e_n is closed if s(e_1) = r(e_n). A closed path c = e_1 e_2 ··· e_n is simple in case s(e_i) ≠ s(e_1) for all 2 ≤ i ≤ n. Such a simple closed path c is said to be based at v = s(e_1). A simple closed path c = e_1 e_2 ··· e_n is a cycle in case there are no repeats in the list of vertices s(e_1), s(e_2), ..., s(e_n). E is called acyclic in case there are no cycles in E.
An exit for a path e_1 e_2 ··· e_n is an edge f ∈ E^1 for which s(f) = s(e_i) and f ≠ e_i for some 1 ≤ i ≤ n. The graph E satisfies Condition (L) in case every cycle in E has an exit.
The graph E satisfies Condition (K) in case no vertex in E is the base of exactly one simple closed path in E.
If α = e 1 e 2 · · · e n is a path in E, then we may view α as an element of the path algebra KE, and as an element of the Leavitt path algebra L K (E) as well. (In this sense, concatenation in the graph E is interpreted as multiplication in KE or L K (E).) We denote by α * the element e * n · · · e * 2 e * 1 of L K (E). We often refer to a path α = e 1 e 2 · · · e n of E (viewed as an element of L K (E)) as a real path, while an element of L K (E) of the form α * = e * n · · · e * 2 e * 1 is called a ghost path. Here are some easily verified basic properties of Leavitt path algebras.
Proposition 2.8. Let E be any graph and K any field.
(1) Every nonzero element r of L_K(E) may be written (not necessarily uniquely) as r = Σ_{i=1}^m k_i α_i β_i^*, where each k_i ∈ K is nonzero and α_i, β_i ∈ Path(E) with r(α_i) = r(β_i).
(2) L_K(E) is unital, with 1 = Σ_{v∈E^0} v, if and only if E^0 is finite. In general, L_K(E) has a set of enough idempotents, consisting of finite sums of distinct vertices.
(3) The K-linear map * : L_K(E) → L_K(E) extending v ↦ v, e ↦ e^*, and e^* ↦ e (so that (αβ^*)^* = βα^*) is an involution of L_K(E). In particular, for Leavitt path algebras, the categories of left L_K(E)-modules and right L_K(E)-modules are isomorphic.

2.1. Examples of familiar / "known" algebras which arise as Leavitt path algebras. We saw in Section 1 how specific algebras arise from Bergman's Theorem, starting with a specified monoid. We re-examine those here, and present additional examples as well.
Example 2.9. Full matrix K-algebras. Let A_n denote the "oriented line" graph having n vertices v_1, v_2, ..., v_n and n − 1 edges e_1, e_2, ..., e_{n−1}, where s(e_i) = v_i and r(e_i) = v_{i+1} for each i. Then L_K(A_n) ≅ M_n(K), the algebra of n × n matrices over K. This is not hard to see. We present two different approaches, in order to play up the germane ideas.
The first approach: consider the standard matrix units E_{i,j} (1 ≤ i, j ≤ n) in M_n(K). Since each vertex (other than v_n) emits a single edge, the (CK2) relation at these vertices becomes e_i e_i^* = v_i. Using this, it is straightforward to verify that the set {E_{i,i} | 1 ≤ i ≤ n} ∪ {E_{i,i+1}, E_{i+1,i} | 1 ≤ i ≤ n − 1} is a Leavitt A_n-family in M_n(K). So the Universal Homomorphism Property ensures the existence of a K-algebra homomorphism ϕ : L_K(A_n) → M_n(K) for which ϕ(v_i) = E_{i,i}, ϕ(e_i) = E_{i,i+1}, and ϕ(e_i^*) = E_{i+1,i}. That ϕ is an isomorphism is easily checked (for instance, by constructing the expected function ψ : M_n(K) → L_K(A_n), and verifying that ψ = ϕ^{−1}).
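The computation behind the first approach can be checked concretely. The following sketch (pure-Python integer matrices, n = 3; the encoding is ours, not from the text) verifies that the matrix units assigned to the v_i, e_i, and e_i^* satisfy the Leavitt A_n-family relations, including (CK1) and (CK2):

```python
# Sketch: verifying that standard matrix units give a Leavitt A_3-family
# in M_3(K), with v_i -> E_{i,i}, e_i -> E_{i,i+1}, e_i* -> E_{i+1,i}.

n = 3

def unit(i, j):
    """Matrix unit E_{i,j} (1-based indices) in M_n."""
    return [[1 if (r == i - 1 and c == j - 1) else 0 for c in range(n)]
            for r in range(n)]

def mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

v = {i: unit(i, i) for i in range(1, n + 1)}
e = {i: unit(i, i + 1) for i in range(1, n)}       # edge e_i : v_i -> v_{i+1}
e_star = {i: unit(i + 1, i) for i in range(1, n)}

for i in range(1, n):
    assert mul(v[i], e[i]) == e[i]              # s(e_i) e_i = e_i
    assert mul(e[i], v[i + 1]) == e[i]          # e_i r(e_i) = e_i
    assert mul(e_star[i], e[i]) == v[i + 1]     # (CK1): e_i* e_i = r(e_i)
    assert mul(e[i], e_star[i]) == v[i]         # (CK2) at v_i: e_i e_i* = v_i
print("all Leavitt A_3-family relations hold")
```

Since v_i emits the single edge e_i, the (CK2) sum at v_i collapses to the single product e_i e_i^*, exactly as in the text.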
The second approach: we analyze the monoid M_{A_n}. The relations give a_{v_1} = a_{v_2} = ··· = a_{v_n}, so that M_{A_n} ≅ Z^+; defining d = Σ_{i=1}^n a_{v_i}, we see easily that the pair (M_{A_n}, d) is isomorphic to the pair (V(M_n(K)), [M_n(K)]), from which L_K(A_n) ≅ M_n(K) follows.

Full matrix rings over K arise as the Leavitt path algebras of graphs other than the A_n graphs. In Theorem 3.1 below we will justify the isomorphisms asserted in the next two examples. These two examples play up the fact that non-isomorphic graphs may have isomorphic Leavitt path algebras. (This observation lies at the heart of much of the current research activity in Leavitt path algebras.)

Example 2.10. Full matrix K-algebras, revisited. For n ∈ N let B_n denote the graph

Example 2.11. Full matrix K-algebras, again revisited.
For n ∈ N let D_n denote the graph. Proceeding in a manner similar to that utilized in Example 2.9, one can easily establish the following two claims. (See Example 1.7.)

Example 2.12. The Laurent polynomial K-algebra. Let R_1 denote the "rose with one petal" graph, consisting of a single vertex v and a single loop e. Then L_K(R_1) ≅ K[x, x^{−1}], the Laurent polynomial algebra, via the map induced by v ↦ 1, e ↦ x, and e^* ↦ x^{−1}.

Here is the Fundamental Example of Leavitt path algebras.
Example 2.13. Leavitt K-algebras. For n ≥ 2, let R_n denote the "rose with n petals" graph, consisting of a single vertex v and n loops e_1, ..., e_n. Then L_K(R_n) ≅ L_K(1, n), the Leavitt algebra of order n. The isomorphism is clear: using the description of the generators and relations for L_K(1, n) given in ( †) above, v ↦ 1, e_i ↦ x_i, and e_i^* ↦ y_i.

Example 2.14. The Toeplitz K-algebra. For any field K, the Jacobson algebra, described in [70], is the K-algebra A = K⟨x, y | xy = 1⟩. This algebra was the first example appearing in the literature of an algebra which is not directly finite, that is, in which there are elements x, y for which xy = 1 but yx ≠ 1. Let T denote the "Toeplitz graph", having two vertices u and w, with a loop e at u and an edge f from u to w; then L_K(T) ≅ A. The isomorphism is not hard to write down explicitly. First, the set {yx, 1 − yx, y²x, y − y²x, yx², x − yx²} is easily shown to be a Leavitt T-family in A (with a_u = yx, a_w = 1 − yx, b_e = y²x, b_f = y − y²x, c_e = yx², and c_f = x − yx²), so by the Universal Homomorphism Property of Leavitt path algebras there exists a K-algebra homomorphism ϕ : L_K(T) → A extending these assignments. On the other hand, we define X = e^* + f^*, Y = e + f in L_K(T). Using (CK1) and (CK2) we get easily that XY = 1. This gives a K-algebra homomorphism ψ : A → L_K(T), the algebra extension of x ↦ X and y ↦ Y. It is easy to check that ϕ and ψ are inverses.
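The failure of direct finiteness in the Jacobson algebra can be made concrete by representing x and y as shift operators on finitely supported sequences; this standard representation (our illustration, not taken from the text) realizes xy = 1 while yx acts as a proper idempotent:

```python
# Sketch: the Jacobson algebra relation xy = 1, yx != 1, realized by
# shift operators on finitely supported sequences (encoded as lists).

def x(seq):
    """Left shift: (a0, a1, a2, ...) -> (a1, a2, ...)."""
    return seq[1:]

def y(seq):
    """Right shift: (a0, a1, ...) -> (0, a0, a1, ...)."""
    return [0] + seq

v = [5, 7, 11]
print(x(y(v)))   # [5, 7, 11] : xy acts as the identity
print(y(x(v)))   # [0, 7, 11] : yx kills the 0th coordinate, so yx != 1
```

Here yx is the idempotent projecting away the 0th coordinate, mirroring the element a_w = 1 − yx in the Leavitt T-family above.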
Example 2.15. Full matrix K-algebras over L_K(E). Let E be any graph, K any field, and n ∈ N. The graph E(n) is defined as follows: for each v ∈ E^0, one adds to E a "tail" of n − 1 new vertices and n − 1 new edges attached at v. Then L_K(E(n)) ≅ M_n(L_K(E)). More generally, for any infinite set I, let B_I denote the graph having vertices

3. Internal / multiplicative properties of Leavitt path algebras
Not surprisingly, a number of the key results in the subject focus on passing structural information from the directed graph E to the Leavitt path algebra L_K(E), and vice versa; i.e., results of the form

( † †)  E has graph property P ⇐⇒ L_K(E) has ring property Q.
The Simplicity Theorem (Theorem 1.20) is the quintessential result of this type. We will describe a number of additional such results in this section and the next. In the author's opinion, these results are quite interesting, some even remarkable, in their own right. Just as compellingly, some of these results have been utilized to produce heretofore unrecognized classes of algebras having interesting ring-theoretic properties. Looking ahead: in contrast, in the next section, we will engage in a discussion of the equally important "external / module-theoretic" properties of Leavitt path algebras. As described in Section 1, the "internal / multiplicative" and "external / module-theoretic" properties form the historical foundations of the subject. We will see in the final section that these also drive much of the current investigative energy.
3.1. Finite dimensional Leavitt path algebras. We start by analyzing the Leavitt path algebras of finite acyclic graphs. From a ring-theoretic point of view, these turn out to be the most basic (least interesting?) of all the Leavitt path algebras.
That the sum Σ_{i=1}^t I(w_i) is direct follows by again using the hypothesis that the w_i are sinks. Now let γδ^* be any monic monomial in L_K(E).
Otherwise, the (CK2) relation may be invoked at v = r(γ) = r(δ), and we may write γδ^* = γvδ^* = Σ_{e∈s^{-1}(v)} (γe)(δe)^*. If r(e) is a sink, then the expression (γe)(δe)^* is in Σ_{i=1}^t I(w_i); if not, then in the same manner one can use the (CK2) relation at r(e) to rewrite (γe)(δe)^*. Since E is finite and acyclic, the process must terminate with expressions of the desired form. ✷

So for finite acyclic graphs, the resulting Leavitt path algebras are, among other things: unital semisimple; left artinian; and finite dimensional. Indeed, any of these three ring/algebra-theoretic properties characterizes the Leavitt path algebras of finite acyclic graphs, thus yielding three examples of results of type ( † †). Perhaps more importantly, Theorem 3.1 yields a result of the following type: among a certain class of graphs (specifically, finite acyclic), we can determine, using easy-to-compute graph-theoretic properties, which of those graphs yield isomorphic Leavitt path algebras (specifically, those for which the number of sinks, and the corresponding N_i, are equal). So Theorem 3.1 may be viewed as a very basic type of Classification Theorem.

Notation 3.2. Let E be any graph, and let v, w ∈ E^0. We write v ≥ w in case there exists p ∈ Path(E) for which s(p) = v and r(p) = w.
Let X be a subset of E^0. X is called hereditary in case, whenever v ∈ X and w ∈ E^0 and v ≥ w, then w ∈ X. X is called saturated in case, whenever v ∈ E^0 is regular and r(s^{-1}(v)) ⊆ X, then v ∈ X. (Less formally: X is saturated in case whenever v is a non-sink in E which emits finitely many edges, and the range vertices of all of those edges are in X, then v is in X as well.) Clearly both E^0 and ∅ are hereditary saturated subsets of E^0, and clearly the intersection of any collection of hereditary saturated subsets of E^0 is again hereditary saturated. If S is any subset of E^0, then S̄ denotes the smallest hereditary saturated subset of E^0 which contains S; S̄ is called the hereditary saturated closure of S. (Such exists by the previous observation.)

The interplay between vertices E^0 of E on the one hand (viewed as idempotent elements of L_K(E)), and ideals of L_K(E) on the other, plays a central role in the ideal structure of L_K(E). This connection clearly brings to light the roles of the two (CK) relations in this context.

Proposition 3.3. Let E be any graph and K any field. Let I be an ideal of L_K(E). Then I ∩ E^0 is a hereditary saturated subset of E^0.

A ring R is Z-graded in case there is a decomposition R = ⊕_{i∈Z} R_i (of abelian groups) for which R_i R_j ⊆ R_{i+j} for all i, j ∈ Z; the subspaces R_i are called the homogeneous components of R. The Leavitt path algebras admit a Z-grading, as follows. Any path K-algebra KÊ of an extended graph Ê is Z-graded, by setting deg(v) = 0 for v ∈ E^0, and deg(e) = 1, deg(e^*) = −1 for e ∈ E^1, and extending additively and multiplicatively. Since the two sets of relations (CK1) and (CK2) consist of homogeneous elements of degree 0 with respect to this grading on KÊ, the grading passes to the quotient algebra L_K(E). In particular, for m ∈ Z, the homogeneous component L_K(E)_m of degree m consists of K-linear combinations of elements of the form αβ^*, where r(α) = r(β) and ℓ(α) − ℓ(β) = m. A two-sided ideal I in a Z-graded ring R is called a graded ideal in case, whenever s ∈ I and s = Σ_{j∈Z} s_j is the decomposition of s into homogeneous components, then s_j ∈ I for each j ∈ Z.
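The hereditary saturated closure of a set S admits an explicit iterative computation, alternately closing under the hereditary and saturation conditions. A sketch for row-finite graphs, with an illustrative encoding of edges as (source, range) pairs (the names are ours):

```python
# Sketch: iteratively computing the hereditary saturated closure of a
# set S of vertices in a row-finite graph.

def hereditary_saturated_closure(vertices, edges, S):
    closure = set(S)
    changed = True
    while changed:
        changed = False
        # hereditary step: if v is in the closure and v -> w, add w
        for (s, r) in edges:
            if s in closure and r not in closure:
                closure.add(r); changed = True
        # saturation step: if a non-sink v has r(s^{-1}(v)) inside
        # the closure, add v  (row-finite: every non-sink is regular)
        for v in vertices:
            ranges = [r for (s, r) in edges if s == v]
            if ranges and v not in closure and all(r in closure for r in ranges):
                closure.add(v); changed = True
    return closure

# Toeplitz graph T: loop e at u, edge f from u to w
vertices = {"u", "w"}
edges = [("u", "u"), ("u", "w")]
print(sorted(hereditary_saturated_closure(vertices, edges, {"w"})))  # ['w']
print(sorted(hereditary_saturated_closure(vertices, edges, {"u"})))  # ['u', 'w']
```

The two runs confirm the claim made later in the text that {w} is the only nontrivial hereditary saturated subset of the Toeplitz graph: starting from u, the hereditary step immediately forces w in as well.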
It is easy to show that if a two-sided ideal I in a Z-graded ring is generated by homogeneous elements of degree 0, then I is a graded ideal. In particular, for any set of vertices X ⊆ E^0, the ideal I(X) of L_K(E) is graded. In contrast, not all ideals of a Leavitt path algebra are necessarily graded; for instance, the ideal of L_K(R_1) ≅ K[x, x^{−1}] generated by 1 + x is not graded. So on the one hand any ideal I of L_K(E) gives rise to the hereditary saturated subset I ∩ E^0 of E^0, while on the other, any subset X of E^0 gives rise to the graded ideal I(X) of L_K(E). The perhaps-expected connection is the following.
Proposition 3.4. Let E be a row-finite graph and K any field. Then the assignments X ↦ I(X) and I ↦ I ∩ E^0 give mutually inverse lattice isomorphisms between the lattice of hereditary saturated subsets of E^0 and the lattice of graded ideals of L_K(E). In particular, every graded ideal of L_K(E) is generated by vertices.
Sketch of Proof. It is not hard to show that I(X) = I(X̄). On the other hand, if v ∈ I(X), then by using an explicit, iterative description of the hereditary saturated closure of a set, one can show that v ∈ X̄. ✷ The connection between these two lattices does not hold verbatim in case E contains infinite emitters, as we will see in Section 5.
It was shown by Bergman [48] that if R is a Z-graded (unital) ring, then the Jacobson radical J(R) is necessarily a graded ideal. (See also [89,Theorem 2.5.40].) Using that J(R) contains no nonzero idempotents in any ring R, Proposition 3.4 yields the following nice "internal" result about Leavitt path algebras.
Corollary 3.5. Let E be any graph and K any field. Then L K (E) has zero Jacobson radical.
3.3. Ideals in Leavitt path algebras. In general, loosely speaking, the two key players in the graph E which drive the ideal structure of L_K(E) are the vertices, and the cycles without exits. While the hereditary saturated subsets will dictate the graded structure of L_K(E), the cycles without exits (when E contains such) provide additional structural nuances. The following result provides some motivation as to why this should be the case. For an element p(x) = Σ_{i=0}^N k_i x^i ∈ K[x] and a cycle c based at v, we denote by p(c) the element Σ_{i=0}^N k_i c^i of L_K(E) (where c^0 denotes v).

Theorem 3.6. The Reduction Theorem. [42, Proposition 3.1] Let E be any graph and K any field. Let 0 ≠ x ∈ L_K(E). Then there exist α, β ∈ Path(E) for which either: (1) 0 ≠ α^* x β = kv for some k ∈ K and v ∈ E^0; or (2) 0 ≠ α^* x β = p(c), where c is a cycle without exits and p(x) ∈ K[x] is nonzero. In other words, we can transform (via multiplication by real paths and/or ghost paths) any element of L_K(E) to either a nonzero multiple of a vertex, or to a nonzero polynomial in a cycle without exits.
Sketch of Proof. The proof uses an idea similar to the one Leavitt used in his proof of the Simplicity Theorem for L_K(1, n) ([77, Theorem 2]). Essentially, starting with x, one shows that there is a path γ in E for which 0 ≠ xγ ∈ KE. This is done by finding v ∈ E^0 for which xv ≠ 0, then writing xv ∈ L_K(E) in a form which minimizes the length of the ghost terms from among all possible representations of xv, and then applying an induction argument. With this in hand, one then modifies xγ via left multiplication by terms of the form δ^* to "reduce" xγ to one of the two indicated forms. ✷

For a hereditary saturated subset H of E^0, let C_H denote the set of cycles c in E for which Vert(c) ∩ H = ∅, but r(f) ∈ H for every exit f of c. For instance, for the Toeplitz graph T described in Example 2.14, let H be the (only nontrivial) hereditary saturated subset {w}. Then the cycle e is in C_H. For any polynomial p(x) ∈ K[x], we may form the ideal I(w, p(e)) of L_K(T) generated by the two elements w and p(e).
In a similar way, for general graphs, using the data provided by a hereditary saturated subset H of E 0 , a set C = C(H) of cycles which miss H but all of whose exits land in H, and nontrivial polynomials P = P (C) = P (C(H)) in K[x] (one for each element of C), we can build an ideal in L K (E), namely, the ideal generated by H together with elements of L K (E) of the form p c (c).
Rephrased, starting with such a triple (H, C(H), P(C(H))), we can build the ideal I(H, {p_c(c)}_{c∈C}). Indeed, this process gives all the ideals of L_K(E). Moreover, with not-hard-to-anticipate order relations defined on triples of the form (H, C, P), there is a stronger form of Theorem 3.8, one which gives a lattice isomorphism between the set of appropriate triples and the lattice of two-sided ideals of L_K(E).
There are some immediate consequences of Theorem 3.8. The most noteworthy of these is the Simplicity Theorem (Theorem 1.20): that L K (E) is simple if and only if the only hereditary saturated subsets of E 0 are ∅ and E 0 , and every cycle in E has an exit. (Of course the chronology here is reversed: the historically-significant Simplicity Theorem precedes the establishment of Theorem 3.8 by almost a decade.) This is seen quite readily. By Theorem 3.8, any ideal of L K (E) looks like I(H, {p c (c)} c∈C ). By hypothesis there are only two possibilities for H. When H = E 0 then C(H), and therefore P (C(H)), is empty, so that the only ideal of this form is I(E 0 ) = L K (E). On the other hand, when H = ∅, then, as by hypothesis every cycle in E has an exit, we get that C(H), and therefore P (C(H)), is empty here as well. So the only ideal of this second form is I(∅) = {0}, and the Simplicity Theorem follows.
Returning yet again to the Toeplitz graph T of Example 2.14, we see as a consequence of Theorem 3.8 that the complete set of ideals of L_K(T) consists of the three graded ideals I(∅) = {0}, I(w), and I(E^0) = L_K(T), together with the nongraded ideals of the form I(w, p(e)), where p(x) ∈ K[x] is a polynomial of degree at least 1 for which p(0) ≠ 0.
Considering the stronger (admittedly unstated) form of Theorem 3.8, a second consequence (also a statement of type ( † †)) is the following description of the Leavitt path algebras satisfying the chain conditions on two-sided ideals. Proposition 3.9. Let E be a row-finite graph and K any field.
(1) L K (E) has the descending chain condition on two-sided ideals if and only if E satisfies Condition (K), and the descending chain condition holds in the lattice of hereditary saturated subsets of E 0 ([11, Theorem 3.9]).
(2) L K (E) has the ascending chain condition on two-sided ideals if and only if L K (E) has the ascending chain condition on graded two-sided ideals, if and only if the ascending chain condition holds in the lattice of hereditary saturated subsets of E 0 . In particular, the Leavitt path algebra L K (E) for every finite graph E has the a.c.c. on two-sided ideals ([11, Theorem 3.6]).

Discussion: The Rosetta Stone.
Of great interest in the study of Leavitt path algebras is the observation that many of the results in the subject seem to (quite mysteriously) mimic corresponding results for graph C*-algebras. For example, comparing the Simplicity Theorem for Leavitt path algebras (Theorem 1.20) with the Simplicity Theorem for graph C*-algebras (Theorem 1.11), we see that the conditions on E which yield simplicity of the associated graph algebra are identical in both cases. Suffice it to say that the proofs of the two Simplicity Theorems utilize significantly different tools. More to the point, even with the close relationship between L_C(E) and C*(E) in mind (cf. Proposition 2.3), it is currently not understood whether either one of the Simplicity Theorems should "directly" imply the other.
We provide in Appendix 1 a list of additional situations in which an algebraic property of L_C(E) is analogous to a topological property of C*(E), and for which the necessary and sufficient graph-theoretic property of E is identical in each case. A systematic reason which would explain the existence of so many such examples is usually referred to as the "Rosetta Stone of Graph Algebras". A good reference which contains in one place a discussion of both Leavitt path algebra and graph C*-algebra properties is [45]. We note that even the seemingly most basic of questions, "if L_C(E) ≅ L_C(F) as rings, is C*(E) ≅ C*(F) as C*-algebras?" (and its converse), has only been answered (in the affirmative) for restricted classes of graphs; the question in general remains open (see [18]). The search for the Rosetta Stone comprises one of the many current lines of research in the field.
3.4. Matrix rings over the Leavitt algebras. There are too many additional "internal / multiplicative" properties of Leavitt path algebras to include them all in this article. For a number of reasons (its connection to the Rosetta Stone and its important consequences outside of Leavitt path algebras, to name two), we spend some space here describing the Isomorphism Question for Matrix Rings over Leavitt algebras.
We reconsider the Leavitt algebras L_K(1, n) for n ≥ 2, the motivating examples of Leavitt path algebras. Fix n and K, and let R denote L_K(1, n). By construction we have R_R ≅ R_R^n as left R-modules; so by taking endomorphism rings and using the standard representation of these endomorphism rings as matrix rings, we get R ≅ M_n(R) as K-algebras. Indeed, since R_R ≅ R_R^{1+j(n−1)} for all j ∈ N, we similarly get R ≅ M_{1+j(n−1)}(R) as K-algebras for all j ∈ N. Now starting from a different point of view: once we have established a ring isomorphism S ≅ M_ℓ(S) for some ring S and some ℓ ∈ N, by taking ℓ × ℓ matrix rings of both sides t times, we get S ≅ M_{ℓ^t}(S) for any t ∈ N. In particular, we have R ≅ M_{n^t}(R) for all t ∈ N; indeed, using the previous observation, we have more generally that R ≅ M_p(R) for every positive integer p ≡ 1 (mod n − 1). The question arises: if R = L_K(1, n) is isomorphic as a K-algebra to some p × p matrix ring over itself, must p be an integer of the form 1 + j(n − 1)? It is not hard to give an example where the answer is negative: one can show (by explicitly writing down matrices which multiply correctly) that R = L_K(1, 4) has R ≅ M_2(R), and 2 is clearly not of the indicated form when n = 4. But an analysis of this particular case leads easily to the observation that if d | n^t for some t ∈ N, then R ≅ M_d(R) (by an explicitly described isomorphism).
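The arithmetic in the preceding paragraph is easy to experiment with. The sketch below (function names are ours) checks numerically that d | n^t for some t forces g.c.d.(d, n − 1) = 1, and exhibits the two pairs discussed in the text:

```python
# Sketch of the arithmetic above: if d divides some power n^t, then
# automatically gcd(d, n-1) = 1; the converse fails (e.g. d = 3, n = 5).

from math import gcd

def divides_some_power(d, n, t_max=60):
    """Does d | n^t for some t <= t_max?  (Every prime factor of d must
    divide n, so a modest bound on t suffices for small inputs.)"""
    return any(pow(n, t) % d == 0 for t in range(1, t_max + 1))

# d = 2, n = 4: 2 | 4^1, and indeed gcd(2, 4 - 1) = 1
print(divides_some_power(2, 4), gcd(2, 3))   # True 1

# d = 3, n = 5: gcd(3, 5 - 1) = 1 even though 3 divides no power of 5
print(divides_some_power(3, 5), gcd(3, 4))   # False 1

# every d dividing a power of n is coprime to n - 1
assert all(gcd(d, n - 1) == 1
           for n in range(2, 12) for d in range(1, 50)
           if divides_some_power(d, n))
```

The final assertion reflects the simple reason behind the implication: any prime dividing d must divide n, and gcd(n, n − 1) = 1.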
The upshot of the previous observations is the natural question: for precisely which pairs d, n is L_K(1, n) ≅ M_d(L_K(1, n)) as K-algebras? The analogous question was posed for matrix rings over the Cuntz algebras O_n in [84]: given n ∈ N, for precisely which d ∈ N is O_n ≅ M_d(O_n)? The resolution of this analogous question required many years of effort. In the end, the solution may be obtained as a consequence of the Kirchberg Phillips Theorem: O_n ≅ M_d(O_n) if and only if g.c.d.(d, n − 1) = 1. So while the C*-algebra question was resolved for matrices over the Cuntz algebras, the solution did not shed any light on the analogous Leavitt algebra question, both because the C*-solution required analytic tools, and because it did not produce an explicit isomorphism between the germane algebras. An easy consequence of [76, Theorem 5] is that, when g.c.d.(d, n − 1) = 1, the monoids V(L_K(1, n)) and V(M_d(L_K(1, n))) are isomorphic. With this and the Cuntz algebra result in hand, it made sense to conjecture that L_K(1, n) ≅ M_d(L_K(1, n)) if and only if g.c.d.(d, n − 1) = 1. As noted above, the conjecture holds whenever d | n^t for some t ∈ N, so that the conjecture is validated in this situation. The key idea was to explicitly produce an isomorphism in situations more general than this. The method of attack was clear: one reaches the desired conclusion by finding a subset of M_d(L_K(1, n)) of size 2n which both behaves as in ( †), and generates M_d(L_K(1, n)) as a K-algebra.
The smallest pair d, n for which g.c.d.(d, n − 1) = 1 but d ∤ n^t for every t ∈ N is the case d = 3, n = 5. Finding a subset of M_3(L_K(1, 5)) of size 2 · 5 = 10 which behaves as in ( †) is not hard; for instance, by (somewhat) mimicking the process used in the d | n^t case, one is led to consider a certain set of five matrices in M_3(L_K(1, 5)), together with the corresponding five "dual" matrices which play the role of the y_i in ( †). Although these ten matrices satisfy ( †), they do not generate all of M_3(L_K(1, 5)) (in retrospect, one can show that these ten matrices do not generate the matrix unit e_{1,3}, for example). The breakthrough came from a process which involves viewing matrices over Leavitt algebras as Leavitt path algebras for various graphs, and then manipulating the underlying graphs appropriately. This process led to the consideration of a (very similar, yet) different set of five matrices, again together with their five duals. The only differences between the two sets of ten matrices lie in the fifth and tenth matrices, where two of the entries have been interchanged. It is now not hard to show that this second set of ten matrices satisfies ( †), and generates M_3(L_K(1, 5)) as a K-algebra. The underlying idea which prompted the interchange of entries is purely number-theoretic, and is fully described in Appendix 2. In short, the integer 3 is used to partition the set {1, 2, 3, 4, 5} into the subsets {1, 4} ⊔ {2, 3, 5}; then, in order to build the first five matrices of this second set, one inserts monomials having leftmost factor x_t into row i in such a way that i and t are in the same subset with respect to this partition. So putting the term x_3 in row 1 and x_4 in row 2 (as is done in the fifth matrix of the first set) will not work; on the other hand, putting x_4 in row 1 and x_3 in row 2 is consistent with this partition, and leads to a collection with the desired properties. Once this observation was made, the generalization to arbitrary d, n was not overly difficult.
There are two historically important consequences of the explicit construction of the isomorphisms which yield Theorem 3.10. First, this context is one of the few places where a result from one side of the graph algebra universe yields a result in the other. Specifically, when K = C and g.c.d.(d, n − 1) = 1, the explicit nature of an isomorphism L_C(1, n) ≅ M_d(L_C(1, n)) constructed in the proof of Theorem 3.10 allows (by a straightforward completion process) for the explicit construction of an isomorphism O_n ≅ M_d(O_n). (The description of such an explicit isomorphism came as more than a bit of a surprise to some researchers in the C*-community.) Second, the explicit construction led to the resolution of a longstanding question in group theory. In the mid 1970's, G. Higman produced, for each pair r, n ∈ N with n ≥ 2, an infinite, finitely presented simple group, denoted G^+_{n,r}. (The groups G^+_{n,r} are called the Higman-Thompson groups.) Higman was able to establish some sufficient conditions regarding isomorphisms between these groups, but did not have a complete classification. However, in 2011, Enrique Pardo showed how the construction given in the proof of Theorem 3.10 could be brought to bear in this regard.

Theorem 3.11. (Pardo) For n ≥ 2 and r, s ∈ N, G^+_{n,r} ≅ G^+_{n,s} if and only if g.c.d.(r, n − 1) = g.c.d.(s, n − 1).

Sketch of Proof. The (=⇒) direction was already known by Higman. Conversely, one first shows that G^+_{n,ℓ} can be realized as an appropriate subgroup of the invertible elements of M_ℓ(L_C(1, n)) for any ℓ ∈ N. Then one verifies that the explicit isomorphism from M_r(L_C(1, n)) to M_s(L_C(1, n)) provided in the proof of Theorem 3.10 takes G^+_{n,r} onto G^+_{n,s}. ✷

For any three positive integers t, n, r (with n ≥ 2), Brin [51] constructed a group (denoted tV_{n,r}) which can be viewed as a t-dimensional analog of the Higman-Thompson group, in that 1V_{n,r} ≅ G^+_{n,r}. (The groups tV_{n,r} are called the Brin-Higman-Thompson groups.)
On the other hand, for t a positive integer and n ≥ 2, one may consider the t-fold tensor product algebra L K (1, n) ⊗t of L K (1, n) with itself t times. (We will more fully consider such tensor products in the following subsection.) In [60], Dicks and Martínez-Pérez beautifully generalize Pardo's Theorem 3.11 by showing that tV n,r is isomorphic to an appropriate subgroup of the invertible elements of M r (L K (1, n) ⊗t ) (specifically, the positive unitaries), and subsequently use this isomorphism to establish that tV n,r ∼ = t ′ V n ′ ,r ′ if and only if t = t ′ , n = n ′ , and g.c.d.(r, n − 1) = g.c.d.(r ′ , n ′ − 1). Along the way, Dicks and Martínez-Pérez present a streamlined, somewhat more intuitive proof of Theorem 3.10.
3.5. Tensor products of Leavitt path algebras. Of fundamental importance in the theory of graph C*-algebras is the fact that O_2 ⊗ O_2 ≅ O_2 (as C*-algebras). This isomorphism is not explicitly described; rather, it follows (originally) from some deep work done by Elliott (and streamlined in [88]). The isomorphism O_2 ⊗ O_2 ≅ O_2 is utilized in the proof of the Kirchberg Phillips Theorem. (The C*-algebra O_2 is nuclear, so that there is no ambiguity in forming this tensor product.) In the context of the previous paragraph, together with the Rosetta Stone discussion, it is then natural to ask: is L_K(1, 2) ⊗_K L_K(1, 2) ≅ L_K(1, 2)? The answer turns out to be no; three different proofs have been given. The first, due to Warren Dicks, proceeds as follows. It is shown in [16] that if E is a graph containing at least one cycle, then L_K(E) is not von Neumann regular, so, in particular, L_K(1, 2) ≅ L_K(R_2) is not von Neumann regular; hence each tensor factor admits a module of flat dimension at least 1. So the flat dimension, and therefore also the global dimension, of L_K(1, 2) ⊗_K L_K(1, 2) is at least 2, so that L_K(1, 2) ⊗ L_K(1, 2) cannot be a Leavitt path algebra (again using Theorem 1.15′), and so can't be isomorphic to L_K(1, 2).
A second proof (unpublished) was offered by Jason Bell and George Bergman. Effectively, Bell and Bergman explicitly constructed a left L_K(1, 2)-module M (involving functions on [0, 1) ⊆ R having finite support contained in the set of rationals of the form n/2^j), and showed that the left L_K(1, 2) ⊗ L_K(1, 2)-module M ⊗ M has projective dimension 2, so that L_K(1, 2) ⊗ L_K(1, 2) has global dimension at least 2, and thus (arguing as did Dicks) cannot be isomorphic to any Leavitt path algebra.
The third approach to verifying that L_K(1, 2) ⊗ L_K(1, 2) ≇ L_K(1, 2) is the most general of the three. Utilizing Hochschild homology, Ara and Cortiñas in [28] showed (among many other things) the following, from which the result of interest follows immediately.
Suppose {E_i}_{i=1}^m and {F_j}_{j=1}^n are finite graphs, each containing at least one cycle, and let K be any field. If L_K(E_1) ⊗ ··· ⊗ L_K(E_m) ≅ L_K(F_1) ⊗ ··· ⊗ L_K(F_n) as K-algebras, then m = n.

Two currently unresolved questions about the tensor products of Leavitt path algebras will be given in Section 6.
3.6. Some additional internal / multiplicative properties of Leavitt path algebras. We conclude the section by presenting five additional multiplicative properties of Leavitt path algebras: primeness; the center; Gelfand–Kirillov dimension; wreath products; and the simplicity of the corresponding bracket Lie algebra.
A ring R is prime in case for any two-sided ideals I, J of R, if IJ = {0} then I = {0} or J = {0}. A graph E is called downward directed if, for any two vertices v, w ∈ E^0, there exists a vertex u ∈ E^0 for which v ≥ u and w ≥ u.

Proposition. Let E be any graph and K any field. Then L_K(E) is prime if and only if E is downward directed.

Sketch of Proof. (⇒) If R denotes L_K(E), and v, w ∈ E^0, then the ideals RvR and RwR are each nonzero, so that RvR · RwR ≠ {0}, so that vRw ≠ {0}; this yields a nonzero element of the form αβ^* with s(α) = v and s(β) = w, so that u = r(α) = r(β) has the desired property.
(⇐) The converse can be proved 'elementwise', but it is easier to invoke [80, Proposition 5.2.6(1)], which implies that for a Z-graded ring, primeness is equivalent to graded primeness. So we need only check that if I, J are nonzero graded ideals, then IJ ≠ {0}. But by Proposition 3.4 (or its generalization Theorem 5.4 given below), any nonzero graded ideal contains a vertex; so if v ∈ E^0 ∩ I and w ∈ E^0 ∩ J, and u ∈ E^0 with v ≥ u and w ≥ u, then 0 ≠ u = u² ∈ IJ. ✷

For a ring R, the center is Z(R) = {r ∈ R | rx = xr for all x ∈ R}. It is well-known that Z(M_n(K)) = K · I_n (where I_n denotes the identity matrix in M_n(K)). Additionally, this easily yields that the center of M_N(K) is {0}. A general description of the center of a Leavitt path algebra includes these observations as specific cases.
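Downward directedness is a finite reachability condition, so for a finite graph the primeness criterion above can be tested mechanically. A sketch, assuming an adjacency-dict encoding of the graph (all names are ours):

```python
# Sketch: checking downward directedness of a finite graph via reachability.
# Encoding (illustrative): adj maps a vertex to the list of range vertices
# of the edges it emits.

def reachable(adj, v):
    """All u with v >= u, i.e. vertices reachable from v (including v)."""
    seen, stack = {v}, [v]
    while stack:
        for u in adj.get(stack.pop(), []):
            if u not in seen:
                seen.add(u); stack.append(u)
    return seen

def downward_directed(adj, vertices):
    """E is downward directed iff every pair of vertices has a common
    vertex below both of them."""
    return all(reachable(adj, v) & reachable(adj, w)
               for v in vertices for w in vertices)

# Toeplitz graph: u -> u, u -> w ; every pair of vertices flows down to w
print(downward_directed({"u": ["u", "w"]}, ["u", "w"]))          # True
# two disjoint loops: v and w have no common "lower" vertex
print(downward_directed({"v": ["v"], "w": ["w"]}, ["v", "w"]))   # False
```

The second example is the graph-side witness that the direct sum of two Leavitt path algebras (here K[x, x⁻¹] ⊕ K[x, x⁻¹]) fails to be prime.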
For a K-algebra A, the Gelfand–Kirillov dimension GK_K(A) is an algebraic invariant of A which, loosely speaking, measures how far A is from being finite dimensional. (Finite dimensional algebras have GK dimension 0. On the other hand, the free associative K-algebra on two generators has GK dimension ∞. Such an algebra is said to have exponential growth; otherwise, the algebra has polynomially bounded growth. See e.g. [72] for a full description.) If C and C′ are two disjoint cycles (i.e., Vert(C) ∩ Vert(C′) = ∅), the symbol C ⇒ C′ indicates that there is a path which starts in Vert(C) and ends in Vert(C′). A sequence of disjoint cycles C_1, ..., C_k is a chain of length k in case C_1 ⇒ ··· ⇒ C_k. Let d_1 denote the maximal length of a chain of cycles in E, and let d_2 denote the maximal length of a chain of cycles each of which has an exit. Then:

(1) L_K(E) has exponential growth if and only if there exists a pair of distinct cycles in E which are not disjoint.
(2) In case L_K(E) has polynomially bounded growth, the GK dimension of L_K(E) equals max(2d_1 − 1, 2d_2).

Further results regarding Leavitt path algebras of polynomially bounded growth, and of the automorphism groups of some specific such algebras, are presented in [24].
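For a finite graph, the criterion in part (1) can be tested via strongly connected components: two distinct non-disjoint cycles exist exactly when some strongly connected component contains strictly more edges than vertices (a standard reformulation for finite graphs; the encoding below is ours). A sketch:

```python
# Sketch: testing the exponential-growth criterion of part (1) for a
# finite graph, via strongly connected components (Kosaraju's algorithm).
# edges is a list of (source, range) pairs, possibly with repeats.

def sccs(vertices, edges):
    adj = {v: [] for v in vertices}
    radj = {v: [] for v in vertices}
    for s, r in edges:
        adj[s].append(r); radj[r].append(s)

    order, seen = [], set()
    def dfs1(v):
        seen.add(v)
        for u in adj[v]:
            if u not in seen:
                dfs1(u)
        order.append(v)
    for v in vertices:
        if v not in seen:
            dfs1(v)

    comp = {}
    def dfs2(v, root):
        comp[v] = root
        for u in radj[v]:
            if u not in comp:
                dfs2(u, root)
    for v in reversed(order):
        if v not in comp:
            dfs2(v, v)
    return comp

def exponential_growth(vertices, edges):
    """True iff some SCC has more internal edges than vertices, i.e.
    iff two distinct non-disjoint cycles exist."""
    comp = sccs(vertices, edges)
    for root in set(comp.values()):
        nv = sum(1 for v in comp if comp[v] == root)
        ne = sum(1 for (s, r) in edges if comp[s] == root == comp[r])
        if ne > nv:
            return True
    return False

# one vertex with two loops: the two cycles share v -> exponential growth
print(exponential_growth(["v"], [("v", "v"), ("v", "v")]))                   # True
# two disjoint loops joined by an edge: polynomially bounded growth
print(exponential_growth(["v", "w"], [("v", "v"), ("v", "w"), ("w", "w")]))  # False
```

The reformulation rests on the fact that a strongly connected component whose internal edge count equals its vertex count is a single cycle, so distinct cycles then live in distinct (hence disjoint) components.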
For a countable dimensional K-algebra C and ring-theoretic property P, an affinization of C with respect to P is an embedding of C in a finitely generated (i.e., affine) K-algebra D for which, if C has P, then so does D.
Let E be a row-finite graph and A any associative K-algebra. In [20], the authors present the construction of the wreath product, denoted A wr L_K(E). In case W is a hereditary saturated subset of E^0, then the wreath product construction allows for the realization of L_K(E) as the wreath product of two Leavitt path algebras, namely, as L_K(W) wr L_K(E/W). Furthermore, let T be the Toeplitz graph of Example 2.14. Then the wreath product A wr L(T) is isomorphic to a K-algebra of the form K[x, x^{−1}] + M_N(A) (with multiplication explicitly described). This algebra can then be embedded in an algebra of the form K[x, x^{−1}] + RCFM_N(A), where RCFM_N(A) is the (unital) ring of those N×N matrices with entries in A, for which each row and each column contains at most finitely many nonzero entries. One may then build in a natural way an affine K-algebra B, generated by four elements, for which K[x, x^{−1}] + M_N(A) ⊆ B ⊆ K[x, x^{−1}] + RCFM_N(A).

Theorem 3.16. [20] For an associative K-algebra A let B be the affine K-algebra described above.
(1) There exists a unital algebra A for which B is an affinization of K[x, x −1 ] + M N (A) with respect to the property non-nil Jacobson radical.
(2) There exists a unital algebra A for which B is an affinization of K[x, x −1 ] + M N (A) with respect to the property non-nilpotent locally nilpotent radical.
Both of the constructs mentioned in Theorem 3.16 give a systematic approach to what had been previously longstanding ring-theoretic questions.  As it turns out, the condition given in Theorem 3.17 for the simplicity of [L K (E), L K (E)] depends not only on the structure of E but also on the characteristic of K (see [15,Examples 28 and 29]). The K-dependence of a result about Leavitt path algebras is very much the exception. But for one intriguing additional example, see Theorem 6.2 and the subsequent discussion.
By introducing and utilizing the notion of a balloon over a subset of E 0 , Alahmedi and Alsulami are able to extend Theorem 3.17 to all row-finite graphs (specifically, the simplicity of L K (E) is not required); see [23,Theorem 2]. For instance, it is shown in [23] that the graph E given here has the property that the Lie algebra [L K (E), L K (E)] is simple, even though the Leavitt path algebra L K (E) is not simple.
In related work [22], the same two authors analyze the simplicity of the Lie algebra of * -skewsymmetric elements of a Leavitt path algebra.

Module-theoretic properties of Leavitt path algebras
The module theory of Leavitt path algebras has for the most part been focused on the structure of the finitely generated projective L K (E)-modules, owing to the Ara / Moreno / Pardo Realization Theorem (Theorem 1.15) describing V(L K (E)). In this section we take a closer look at the structure of these projectives, specifically, the purely infinite modules. Of central interest here is the question of whether or not the analog of the Kirchberg Phillips Theorem (Theorem 1.12) holds for Leavitt path algebras; we present in Theorem 4.8 the Restricted Algebraic KP Theorem. We next look at the structure of some simple (non-projective) L K (E)-modules. We conclude by considering some monoid-theoretic properties of V(L K (E)).

4.1.
Purely infinite simplicity. We have seen that the cycle structure of the graph E, and the existence of exits for those cycles, is a significant factor driving the algebraic structure of the Leavitt path algebra L K (E). We have also seen behavior in the Leavitt algebras L K (1, n) that at first glance seems somewhat exotic: R R ∼= R R^n as left R-modules. In particular, the module R R has the property that R R ∼= R R ⊕ P where P ≠ {0}; i.e., R R has a direct summand isomorphic to itself whose complement is nonzero. In general, a left R-module M is called infinite in case M ∼= M ⊕ N for some nonzero module N. This same sort of behavior is manifest in L K (E) when E has cycles with exits: if c is a cycle in E based at the vertex v, and e is an exit for c with s(e) = v, then the left module L K (E)v is infinite, via the decomposition L K (E)v = L K (E)cc * ⊕ L K (E)(v − cc * ).

Proof. Clearly L K (E)v = L K (E)cc * + L K (E)(v − cc * ). The sum is direct: if xcc * = y(v − cc * ) for x, y ∈ L K (E), then multiplying both sides on the right by cc * yields xcc * = y(cc * − cc * ) = 0. That L K (E)v ∼= L K (E)cc * as left L K (E)-modules follows from the previous Remark. Since e is an exit for c we have e * c = 0 by (CK1). To show that the complement L K (E)(v − cc * ) is nonzero, assume to the contrary that v − cc * = 0. Multiplying both sides on the left by ee * gives ee * − 0 = 0, thus ee * = 0, which is impossible. ✷

An idempotent x ∈ R is called infinite in case the module Rx is infinite. The ring R is called purely infinite simple in case R is simple and each nonzero left ideal of R contains an infinite idempotent. Purely infinite simple rings were first introduced in [35]; the idea was born in the context of C * -algebras. Clearly an infinite module can satisfy neither of the two chain conditions, nor can it have finite uniform dimension.
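The isomorphism R R ∼= R R^n for R = L K (1, n) can be written down explicitly from the defining relations. In the following sketch we write R n for the rose with n petals e 1 , . . . , e n ; the names φ and ψ for the mutually inverse maps are our own notation.

```latex
% Relations in R = L_K(R_n):  e_i^* e_j = \delta_{ij}\cdot 1  and  \sum_{i=1}^n e_i e_i^* = 1.
% Mutually inverse homomorphisms of left R-modules:
\varphi\colon R^n \to R,\quad (r_1,\dots,r_n) \mapsto \sum_{i=1}^n r_i e_i^*,
\qquad
\psi\colon R \to R^n,\quad r \mapsto (re_1,\dots,re_n),
% since
\psi\varphi(r_1,\dots,r_n) = \Bigl(\sum_{i=1}^n r_i e_i^* e_j\Bigr)_{j=1}^n = (r_j)_{j=1}^n,
\qquad
\varphi\psi(r) = \sum_{i=1}^n r e_i e_i^* = r.
```

Note that both maps are given by right multiplications, hence are homomorphisms of left R-modules.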
With the Simplicity Theorem in hand, and with Proposition 4.2 as guidance, some medium-level effort yields the following.

Theorem 4.3. (The Purely Infinite Simplicity Theorem) Let E be a row-finite graph and K any field. Then L K (E) is purely infinite simple if and only if L K (E) is simple and E contains at least one cycle.

In the context of Leavitt path algebras, the purely infinite simple algebras play an especially intriguing role. For any ring S, the Grothendieck group K 0 (S) is the universal group corresponding to the abelian monoid V(S). (Here universal means that any monoid homomorphism from V(S) to an abelian group G necessarily factors through K 0 (S).) When V(S) ∼= Z + (as is often the case, e.g., when S is a field or S = Z), one gets K 0 (S) ∼= Z by "adding in the negatives". As it turns out, however, if S is purely infinite simple, then V(S) \ {[0]} is a group, precisely K 0 (S). This is perhaps counterintuitive at first glance: although V(S) has an identity element (namely [0]), the set V(S) \ {[0]} is closed under ⊕, and forms a group whose identity element is necessarily some nonzero class. Although the converse is not true for arbitrary rings, when one restricts to the class of Leavitt path algebras the converse is true as well [83]: L K (E) is purely infinite simple if and only if V(L K (E)) \ {[0]} is a group (necessarily K 0 (L K (E))). Moreover, this group is easy to describe in this situation. As is standard, for a finite graph E with E 0 = {v 1 , v 2 , . . . , v n }, the incidence matrix of E is the |E 0 | × |E 0 | Z + -valued matrix A E , where A E (i, j) equals the number of edges e for which s(e) = v i and r(e) = v j . By interpreting the (CK2) relation as it plays out in V(L K (E)), one gets

Proposition 4.4. Suppose L K (E) is purely infinite simple, with E 0 = {v 1 , . . . , v n }. Then K 0 (L K (E)) ∼= Z^n /(I n − A E )Z^n ,

where I n denotes the n × n identity matrix.
In other words, when L K (E) is purely infinite simple, then K 0 (L K (E)) is the cokernel of the linear transformation I n − A E : Z n → Z n induced by matrix multiplication.
As an easy example of how this plays out in an already-familiar situation, suppose E = R m , the graph with one vertex and m loops. Then A E = (m), so I − A E is the 1 × 1 matrix (1 − m), and K 0 (L K (E)) ∼= Z/(1 − m)Z = Z/(m − 1)Z, as we've seen previously.
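The cokernel description is readily computable. The following is a minimal pure-Python sketch (all function names are ours, and the two-vertex adjacency matrix used below is a made-up illustration, not a graph from the text): it diagonalizes I_n − A_E over Z by unimodular row and column operations, from which K_0(L_K(E)) is read off as a direct sum of cyclic groups.

```python
def diagonal_form(mat):
    """Diagonalize an integer matrix by unimodular row/column operations;
    return the absolute values of the diagonal entries."""
    m = [row[:] for row in mat]
    n = len(m)
    for t in range(n):
        while True:
            # locate an entry of smallest nonzero absolute value in m[t:, t:]
            best = None
            for i in range(t, n):
                for j in range(t, n):
                    if m[i][j] and (best is None or
                                    abs(m[i][j]) < abs(m[best[0]][best[1]])):
                        best = (i, j)
            if best is None:
                break                          # remaining submatrix is zero
            i0, j0 = best
            m[t], m[i0] = m[i0], m[t]          # move pivot to position (t, t)
            for row in m:
                row[t], row[j0] = row[j0], row[t]
            p = m[t][t]
            for i in range(t + 1, n):          # shrink column t via row ops
                q = m[i][t] // p
                m[i] = [a - q * b for a, b in zip(m[i], m[t])]
            for j in range(t + 1, n):          # shrink row t via column ops
                q = m[t][j] // p
                for i in range(n):
                    m[i][j] -= q * m[i][t]
            if all(m[i][t] == 0 for i in range(t + 1, n)) and \
               all(m[t][j] == 0 for j in range(t + 1, n)):
                break                          # row and column t fully cleared

    return [abs(m[i][i]) for i in range(n)]

def k0_data(A):
    """Diagonal entries d of I_n - A_E; K_0 is the direct sum of the
    groups Z/dZ (d = 0 contributes a copy of Z, d = 1 is trivial)."""
    n = len(A)
    IA = [[(1 if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]
    return diagonal_form(IA)

# R_4, the rose with one vertex and four loops: K_0(L_K(R_4)) = Z/3Z,
# matching the Z/(m-1)Z computation in the text.
print(k0_data([[4]]))   # -> [3]
```

For instance, `k0_data([[2]])` returns `[1]`, recovering the fact that K 0 (L K (1, 2)) is trivial.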

4.2.
Towards a Classification Theorem for purely infinite simple Leavitt path algebras. In many endeavors in which an object from one class is associated to an object in another, a fundamental question is to identify the fibers of the process; that is, determine which objects from the first class correspond to the same object in the second. Asked in the current context: if two graphs E, F produce the "same" Leavitt path algebra (up to isomorphism, or up to Morita equivalence, or up to some other ring-theoretic invariant), can anything be said about the relationship between E and F ? As seen in Theorem 3.1, if E and F are finite acyclic graphs for which L K (E) ∼= L K (F ), then E and F have the same number of sinks, and the same number of directed paths ending at those sinks. (An additional easy consequence of Theorem 3.1 is that if L K (E) is Morita equivalent to L K (F ), then E and F have the same number of sinks.) We spend some time here investigating this question in the context of purely infinite simple Leavitt path algebras. The reason is twofold: this investigation plays up an important relationship between Leavitt path algebras and symbolic dynamics, and also provides the foundation for much of the current research focus in Leavitt path algebras. The discussion here will be quite broad and intuitive; for details, the standard reference is [78].
For a finite directed graph E, one defines the notion of a "flow" (essentially, "flow of information") through the graph. Two graphs E and F are "flow equivalent" in case the collection of flows through E match up appropriately with the collection of flows through F . Two matrices with entries in Z + are called flow equivalent in case the directed graphs corresponding to the two matrices are flow equivalent. The directed graph E (or the corresponding incidence matrix A E ) is called
(1) irreducible if for any pair v, w ∈ E 0 there is a path from v to w;
(2) essential if no vertex of E is a source or a sink;
(3) trivial if E consists of a single cycle with no other vertices or edges.
There are a number of ways to systematically modify a directed graph. As an intuitive example, expansion at v modifies the graph E to the graph E v as indicated here.
It can easily be shown that the graphs E and E v are flow equivalent. In a similar manner, one may describe five more systematic modifications of a graph (each having the property that the original graph is flow equivalent to the modified graph): contraction (the inverse of expansion); out-split, as well as its inverse out-amalgamation; and in-split, as well as its inverse in-amalgamation. The specific descriptions of these "graph moves" are given in Appendix 3.
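Since the figure illustrating expansion is not reproduced here, the encoding below is our reading of the move (add a new vertex w, redirect the edges leaving v so that they leave w, and add a single edge v → w); treat it as an assumption. With that assumption, a short Python sketch checks on examples that the invariant det(I − A) is unchanged by the move.

```python
def det(M):
    """Integer determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def det_I_minus(A):
    """det(I - A) for a square integer adjacency matrix A."""
    n = len(A)
    return det([[(1 if i == j else 0) - A[i][j] for j in range(n)]
                for i in range(n)])

def expand(A, v):
    """Expansion at vertex v (our encoding of the move): new vertex w
    inherits every edge that left v, and v keeps a single edge v -> w."""
    n = len(A)
    B = [row[:] + [0] for row in A]   # add a column for the new vertex w
    B.append(B[v][:])                 # w's out-edges = v's former out-edges
    B[v] = [0] * (n + 1)
    B[v][n] = 1                       # the single new edge v -> w
    return B

A = [[4]]                             # the rose R_4
print(det_I_minus(A), det_I_minus(expand(A, 0)))   # -> -3 -3
```

Running the same check on a made-up two-vertex matrix, or expanding at the other vertex, gives the same agreement, consistent with flow invariance of det(I − A).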
The first deep, fundamental theorem germane to the current discussion is
Franks' Theorem. Suppose E and F are irreducible essential nontrivial graphs, with |E 0 | = n and |F 0 | = m. Then E and F are flow equivalent if and only if Z^n /(I n − A E )Z^n ∼= Z^m /(I m − A F )Z^m and det(I − A E ) = det(I − A F ).
The second deep, fundamental theorem germane to the current discussion is
The Parry / Sullivan Theorem. Two finite directed graphs are flow equivalent if and only if one can be gotten from the other by a sequence of transformations involving these six graph moves.
Combining Franks' Theorem with the Parry / Sullivan Theorem, we get
Theorem 4.5. Suppose E and F are irreducible essential nontrivial graphs. Then Z^n /(I n − A E )Z^n ∼= Z^m /(I m − A F )Z^m and det(I − A E ) = det(I − A F ) if and only if E can be obtained from F by some sequence of graph moves, with each move one of the six types described above.
We are now in a position to present the (miraculous?) bridge between the ideas from flow dynamics and those of Leavitt path algebras. First, using the Purely Infinite Simplicity Theorem (Theorem 4.3) and some straightforward graph theory, it is not hard to show that E is irreducible, essential, and nontrivial if and only if E has no sources and L K (E) is purely infinite simple. Next,
Proposition 4.6. Suppose E is a graph for which L K (E) is purely infinite simple. Suppose F is gotten from E by doing one of the six aforementioned graph moves. Then L K (E) and L K (F ) are Morita equivalent. In particular, L K (F ) is purely infinite simple. In addition, if v is a source in E, and F is gotten from E by eliminating v and all edges e ∈ E 1 having s(e) = v, then L K (E) and L K (F ) are Morita equivalent.
Sketch of Proof. It is not hard to show that an isomorphic copy of L K (E) can be viewed as a (necessarily full, by simplicity) corner of L K (F ) (or vice-versa), where E and F are related by one of the graph moves. ✷

The previous discussion yields the first of two desired results.
Theorem 4.7. Let E and F be finite graphs and K any field. Suppose L K (E) and L K (F ) are purely infinite simple. If K 0 (L K (E)) ∼= K 0 (L K (F )) and det(I − A E ) = det(I − A F ), then L K (E) and L K (F ) are Morita equivalent.
Sketch of Proof. Suppose E and/or F have sources; then using Proposition 4.6 we may construct graphs E ′ and F ′ having no sources, for which L K (E ′ ) and L K (F ′ ) are purely infinite simple, L K (E) is Morita equivalent to L K (E ′ ), and L K (F ) is Morita equivalent to L K (F ′ ). But since Morita equivalent rings have isomorphic K 0 groups, and since (it is straightforward to show) det(I − A E ) = det(I − A E ′ ) and det(I − A F ) = det(I − A F ′ ), the hypotheses of Theorem 4.5 are satisfied for E ′ and F ′ . Thus F ′ can be gotten from E ′ by a sequence of appropriate graph moves. But again invoking Proposition 4.6, each of these moves preserves Morita equivalence. So L K (E ′ ) is Morita equivalent to L K (F ′ ), and the result follows. ✷

The third deep, fundamental result of interest here is Huang's Theorem (whose somewhat technical statement we omit), which makes it possible to promote an appropriate Morita equivalence between purely infinite simple algebras to an isomorphism. Combining it with Theorem 4.7 yields the second desired result.

Theorem 4.8. (The Restricted Algebraic Kirchberg Phillips Theorem) Let E and F be finite graphs and K any field. Suppose L K (E) and L K (F ) are purely infinite simple. If there is an isomorphism K 0 (L K (E)) ∼= K 0 (L K (F )) taking [L K (E)] to [L K (F )], and if det(I − A E ) = det(I − A F ), then L K (E) ∼= L K (F ) as rings.

Sketch of Proof. Apply Theorem 4.7 together with Huang's Theorem.
✷

As an example of how the Restricted Algebraic KP Theorem can be implemented, let E be the graph pictured here. Then using the description provided in Proposition 4.4, we get K 0 (L K (E)) ∼= Z/3Z; moreover, under this isomorphism, [L K (E)] corresponds to 1. Easily we get det(I − A E ) = −3 < 0. But the Leavitt path algebra L K (R 4 ) ∼= L K (1, 4) has precisely the same data associated with it, so we conclude that L K (E) ∼= L K (1, 4).
In Section 6 we describe how the Restricted Algebraic Kirchberg Phillips Theorem has been acting as a springboard for much of the current research energy in the subject.

4.3.
Simple L K (E)-modules. We now move our focus on L K (E)-modules from projectives to simples.
Let p be an infinite path in E; that is, p is a sequence e 1 e 2 e 3 · · · , where e i ∈ E 1 for all i ∈ N, and for which s(e i+1 ) = r(e i ) for all i ∈ N. (N.b.: an infinite path in E is not an element of Path(E), nor of the Leavitt path algebra L K (E).) The set of infinite paths in E is denoted by E ∞ . For p = e 1 e 2 e 3 · · · ∈ E ∞ and n ∈ N, p >n denotes the infinite path e n+1 e n+2 · · · .
Let c be a closed path in E. Then ccc · · · is an infinite path in E, denoted by c ∞ , and called a cyclic infinite path. A closed path d is irreducible in case d cannot be written as e j for any closed path e and j > 1. For any closed path c there exists an irreducible d for which c = d n ; then c ∞ = d ∞ as elements of E ∞ .
For p, q ∈ E ∞ , p and q are tail equivalent (written p ∼ q) in case there exist integers m, n for which p >m = q >n (i.e., in case p and q eventually become the same infinite path). For p ∈ E ∞ , [p] denotes the ∼ equivalence class of p. An element p of E ∞ is rational in case p ∼ c ∞ for some irreducible closed path c; otherwise p is irrational. For instance, in the rose with two petals R 2 (with petals e and f ), the infinite path ef ef 2 ef 3 ef 4 · · · is irrational. In any graph E for which there exists a vertex having two distinct irreducible closed paths based at that vertex, it is not hard to show that there are uncountably many irrational infinite paths in E ∞ . Additionally, there are infinitely many irreducible closed paths in such a situation (and thus infinitely many tail-inequivalent rational infinite paths); for instance, any closed path of the form ef i for i ∈ Z + is irreducible in R 2 .
Definition 4.9. Let p be an infinite path in the graph E, and let K be any field. Let V [p] denote the K-vector space having basis [p], consisting of the distinct elements of E ∞ which are tail-equivalent to p. For v ∈ E 0 , e ∈ E 1 , and q = f 1 f 2 f 3 · · · ∈ [p], define v · q = q if v = s(f 1 ) (and 0 otherwise); e · q = eq if r(e) = s(f 1 ) (and 0 otherwise); and e * · q = q >1 if e = f 1 (and 0 otherwise). Extending K-linearly gives V [p] the structure of a left L K (E)-module.

Theorem 4.10. (Chen) For any p ∈ E ∞ , V [p] is a simple left L K (E)-module. Moreover, V [p] ∼= V [q] as L K (E)-modules if and only if p ∼ q.

A module of the form V [p] as in Theorem 4.10 is called a Chen simple L K (E)-module. In [39], Ara and Rangaswamy describe those Leavitt path algebras L K (E) which admit at most countably many simple left modules (Chen simples or otherwise) up to isomorphism. Building on an observation made prior to Definition 4.9, one sees that the structure of K plays a role in this result, in that when L K (E) has this property, and E contains cycles, then necessarily K must be countable.
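The basis action defining a Chen module can be modeled concretely for eventually periodic (rational) paths. Below is a toy Python sketch for the rose with two petals R 2 (loops 'e' and 'f'): a rational infinite path is stored as a pair (prefix, cycle), meaning prefix followed by the cycle repeated forever. This representation and the function names are ours, for illustration only, and tuple equality compares representations rather than paths (no normalization up to tail equivalence is attempted).

```python
def act_edge(g, p):
    """The action g . p = gp; in R_2 (one vertex) every edge is
    composable with every infinite path, so this never vanishes."""
    prefix, cycle = p
    return ((g,) + prefix, cycle)

def act_ghost(g, p):
    """The action g* . p: strip the first edge of p if that edge is g;
    otherwise the result is 0 (represented here as None)."""
    prefix, cycle = p
    if not prefix:                 # unfold one period of the repeating cycle
        prefix = cycle
    if prefix[0] != g:
        return None
    return (prefix[1:], cycle)

p = ((), ('e', 'f'))               # the rational path (ef)(ef)(ef)...
q = act_edge('e', p)               # e . p
print(act_ghost('e', q) == p)      # e* e . p = p  -> True
print(act_ghost('f', q))           # f* e . p = 0  -> None
```

The two printed lines illustrate the (CK1)-compatible behavior of the action on basis elements: e * e acts as the identity on paths beginning appropriately, while f * e annihilates them.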
It is possible to explicitly describe projective resolutions for the Chen simple modules. Let β = e 1 e 2 · · · e n ∈ Path(E) or β = e 1 e 2 · · · ∈ E ∞ . For each i ≥ 0 (and i ≤ n − 1 if β = e 1 · · · e n ∈ Path(E)), one associates to β a certain set X i (β) of paths in E (we omit the precise description).

Theorem 4.11.
(1) Let c be an irreducible closed path in E, with v = s(c). Then V [c ∞ ] is finitely presented (in fact, singly presented), via an explicitly described projective resolution.
(2) Let p ∈ E ∞ be an irrational infinite path in E for which no element of Vert(p) is an infinite emitter. Then there is an explicitly described projective resolution of V [p] . In particular, V [p] is finitely presented if and only if X i (p) is nonempty for at most finitely many i ∈ Z + .
Theorem 4.11 sharpens and clarifies some of the results of [38]. The explicit description of projective resolutions given in Theorem 4.11 can be used to (easily) show that V [c ∞ ] is never projective, and that V [p] (for p irrational) is not projective when V [p] is not finitely presented (e.g., whenever E is a finite graph). Consequently, these two types of modules admit nontrivial extensions, some of which are captured in the following result.
Theorem 4.12. [14] Let E be a finite graph and K any field. Let T be a Chen simple module.
(1) Let d be an irreducible closed path in E with v = s(d). Then the extension group Ext 1 L K (E) (V [d ∞ ] , T ) is explicitly described in [14] (we omit the statement).

As a consequence of Theorem 4.12, whenever E is a graph containing at least one cycle, (non-projective) indecomposable L K (E)-modules of any desired finite length can be constructed.
We close this subsection on simple L K (E)-modules by noting that Rangaswamy [87] has given a construction of such modules arising from the infinite emitters v of E 0 .

4.4.

Additional module-theoretic properties of L K (E). The previous discussion in this section first focused on projective modules, then on non-projective simple modules, over Leavitt path algebras. We conclude the section by mentioning some monoid-theoretic properties of M = V(L K (E)). As the V-monoid of a ring, M is of course conical, and contains a distinguished element (as described prior to Theorem 1.6). But there are two important additional properties of V(L K (E)), both of which yield information about the decomposition of projective L K (E)-modules.
Suppose that M is a left R-module which admits two direct sum decompositions M = A 1 ⊕ A 2 = B 1 ⊕ B 2 . We ask whether there is necessarily some relationship between the two decompositions, indeed, whether there is some compatible "refinement" of these which allows for the systematic formation of each of the summands. More formally, suppose A 1 ⊕ A 2 ∼= B 1 ⊕ B 2 as left R-modules. Then a refinement of this pair of direct sums consists of left R-modules M 11 , M 12 , M 21 , and M 22 for which A 1 ∼= M 11 ⊕ M 12 , A 2 ∼= M 21 ⊕ M 22 , B 1 ∼= M 11 ⊕ M 21 , and B 2 ∼= M 12 ⊕ M 22 .

A second type of decomposition property of modules relates to cancellation of direct summands. Clearly, in general, an isomorphism A ⊕ C ∼= B ⊕ C of left R-modules need not imply A ∼= B. In various situations it is natural to require a stronger relationship between such isomorphic direct sums, prior to trying to cancel C. One possible approach is as follows. A ring R is called separative in case it satisfies the following property: if A, B, C ∈ V(R) satisfy A ⊕ C ∼= B ⊕ C, and C is isomorphic to direct summands of both A^n and B^n for some n ∈ N, then A ∼= B. (Note that this additional condition obviously rules out the standard counterexamples to cancellation.) In fact, the class of primely generated refinement monoids satisfies many other nice cancellation properties, e.g. unperforation. We will revisit refinement monoids at the end of Section 6.

Classes of algebras related to, or motivated by, Leavitt path algebras of row-finite graphs
Historically, Leavitt path algebras were first defined only in the context of row-finite graphs. Subsequently, the more general definition of Leavitt path algebras for countable graphs ([9]), and then for truly arbitrary graphs ([66]), appeared in the literature. The original notion of a Leavitt path algebra for row-finite graphs has been generalized in other ways as well, including: the construction of Leavitt path algebras for separated graphs; Cohn path algebras; Kumjian-Pask algebras of higher rank graphs; Leavitt path rings; and more. In this section we give an overview of some of these Leavitt-path-algebra-inspired structures.

5.1.

Leavitt path algebras for arbitrary graphs. Suppose E is a graph which contains an infinite emitter v; that is, the set s −1 (v) = {e ∈ E 1 | s(e) = v} is infinite. Then in a purely ring-theoretic context, the symbol ∑ e∈s −1 (v) ee * , which would be the natural generalization of the (CK2) relation imposed at v, is not defined. Even in the analytic context of graph C * -algebras, where convergence properties might allow for some sort of appropriate interpretation of an infinite sum, an expression of the form ∑ e∈s −1 (v) s e s * e proves to be problematic, in part owing to the fact that {s e s * e | e ∈ s −1 (v)} is an infinite set of orthogonal projections. So, somewhat cavalierly, we simply choose not to invoke any (CK2)-like relation at infinite emitters. We recall that a vertex v ∈ E 0 is regular in case 0 < |s −1 (v)| < ∞.
Definition 5.1. Let E = (E 0 , E 1 , s, r) be any graph, and K any field. Let E denote the extended graph of E. The Leavitt path K-algebra L K (E) is defined as the path K-algebra K E, modulo the relations:
(CK1) e * e ′ = δ e,e ′ r(e) for all e, e ′ ∈ E 1 .
(CK2) v = ∑ e∈s −1 (v) ee * for every regular vertex v ∈ E 0 .
Equivalently, we may define L K (E) as the free associative K-algebra on generators {v | v ∈ E 0 } ∪ {e | e ∈ E 1 } ∪ {e * | e ∈ E 1 }, modulo the relations:
(1) vw = δ v,w v for all v, w ∈ E 0 .
(2) s(e)e = er(e) = e for all e ∈ E 1 .
(3) r(e)e * = e * s(e) = e * for all e ∈ E 1 .
(4) e * e ′ = δ e,e ′ r(e) for all e, e ′ ∈ E 1 .
(5) v = ∑ e∈s −1 (v) ee * for every regular vertex v ∈ E 0 .
So the definition of a Leavitt path algebra for arbitrary graphs is essentially word-for-word identical to that for row-finite graphs (since "regular" and "non-sink" are identical properties in the row-finite case); there is simply no (CK2) relation imposed at any vertex which is the source vertex of infinitely many edges.
The generalization from Leavitt path algebras of row-finite graphs to those of arbitrary graphs was achieved in two stages. Owing to the hypotheses typically placed on the corresponding graph C * -algebras (in order to ensure separability), the initial extension for Leavitt path algebras was to graphs having countably many vertices and edges. It is shown in [9] that the Leavitt path algebra of any such countable graph is Morita equivalent to the Leavitt path algebra of a suitably defined row-finite graph, using the desingularization process. Subsequently, the foundational results regarding Leavitt path algebras for arbitrary graphs were presented in [66]. Among other things, Goodearl established a suitable definition and context for morphisms between graphs (so-called CKmorphisms). He was then able to show that direct limits exist in the appropriately defined graph category (denoted CKGr), and that the functor L K from CKGr to the category of K-algebras preserves direct limits.
The generalization to Leavitt path algebras of arbitrary graphs (from those of row-finite graphs) indeed expands the Leavitt path algebra universe. For instance, it was shown in [17] that L K (E) is Morita equivalent to L K (F ) for some row-finite graph F if and only if E contains no uncountable emitters (i.e., in case the set s −1 (v) is at most countable for each v ∈ E 0 ). So, for instance, let I be an uncountable set, and let D I denote the graph consisting of two vertices v, w, and edges {e i | i ∈ I}, where s(e i ) = v and r(e i ) = w. Then L K (D I ) is isomorphic to the (unital) K-algebra generated by M I (K) ∪ {Id}, where Id is the I × I identity matrix. So L K (D I ) is not Morita equivalent (let alone isomorphic) to the Leavitt path algebra of any row-finite graph. Similarly, if R c denotes the "rose with uncountably many petals" graph, then L K (R c ) is not Morita equivalent to L K (F ) for any row-finite graph F .
In this expanded universe of Leavitt path algebras for arbitrary graphs, many of the results established in the row-finite case generalize verbatim, but many do not. One of the main differences is that in the general case, we may pick up many new idempotents inside L K (E) for which there are no counterparts in the row-finite case. For instance, let v ∈ E 0 , and let e ∈ s −1 (v). Then the element x = v − ee * of L K (E) is easily shown to be an idempotent. If v is a regular vertex, then x = ∑ f ∈s −1 (v), f ≠e f f * by the (CK2) relation. On the other hand, if v is an infinite emitter, then x has no such analogous representation.
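That x = v − ee * is an idempotent is a one-line computation with the relations; a sketch:

```latex
% Using s(e) = v, so that v\,ee^* = ee^* = ee^*\,v, and (CK1): e^*e = r(e), with e\,r(e) = e:
x^2 = (v - ee^*)(v - ee^*)
    = v - ee^* - ee^* + e(e^*e)e^*
    = v - 2ee^* + e\,r(e)\,e^*
    = v - 2ee^* + ee^*
    = v - ee^* = x.
```

Note that the computation uses no (CK2)-type relation at v, so it is valid whether or not v is an infinite emitter.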
We recall the graph-theoretic ideas given in Notation 3.2: a subset X of E 0 is hereditary in case, whenever v ∈ X and w ∈ E 0 and v ≥ w, then w ∈ X; X is saturated in case, whenever v ∈ E 0 is regular and r(s −1 (v)) ⊆ X, then v ∈ X.
Definition 5.2. Let E be any graph, and let H be a hereditary subset of E 0 . A vertex v ∈ E 0 is a breaking vertex of H in case v is in the set
B H := {v ∈ E 0 \ H | v is an infinite emitter, and 0 < |s −1 (v) ∩ r −1 (E 0 \ H)| < ∞}.
In words, B H consists of those vertices which are infinite emitters, which do not belong to H, and for which the ranges of the edges they emit are all, except for a finite (but nonzero) number, inside H.

For v ∈ B H , one defines the element v H := v − ∑ e∈s −1 (v), r(e)∈E 0 \H ee * of L K (E) (a finite sum, by the definition of B H ); and, for any subset S ⊆ B H , one defines S H := {v H | v ∈ S}.
Of course a row-finite graph contains no breaking vertices, so that this concept does not play a role in the study of Leavitt path algebras arising from such graphs. Also, we note that both B E 0 and B ∅ are empty. To help clarify the concept of breaking vertex, we offer the following example.
Example 5.3. Let C N be the infinite clock graph pictured here: a central vertex v, vertices U = {u i | i ∈ N}, and edges {e i | i ∈ N} with s(e i ) = v and r(e i ) = u i . Any subset of U is a hereditary subset of C 0 N . We note also that, since saturation applies only to regular vertices, any subset of U is saturated as well.
There is an appropriate lattice structure which can be defined on T E so that the map ϕ is a lattice isomorphism. In addition, there is a generalization of Theorem 5.4 to the lattice of all ideals of L K (E); see [6, Theorem 2.8.10].
We close the subsection by presenting a result which is of interest in its own right (it provided a systematic approach to answering a decades-old question of Kaplansky), and which will reappear later in the context of the Rosetta Stone. An algebra A is called left primitive in case A admits a faithful simple left module. It was shown in [44] that for row-finite graphs, L K (E) is primitive if and only if E is downward directed and satisfies Condition (L). However, the extension of this result to arbitrary graphs requires an extra condition. The graph E has the Countable Separation Property in case there exists a countable set S ⊆ E 0 with the property that for every v ∈ E 0 there exists s ∈ S for which v ≥ s. The appropriate generalization is then: for an arbitrary graph E, L K (E) is primitive if and only if E is downward directed, satisfies Condition (L), and has the Countable Separation Property.

5.2.
Leavitt path algebras of separated graphs. The (CK2) condition imposed at any regular vertex in the definition of a Leavitt path algebra may be modified in various ways. Such is the motivation for the discussion in both this and the following subsection. All of these ideas appear in [31].
In the (CK2) condition which appears in the definition of the Leavitt path algebra L K (E), the edges emanating from a given regular vertex v are treated as a single entity, and the relation v = ∑ e∈s −1 (v) ee * is imposed. More generally, one may partition the set s −1 (v) into disjoint nonempty subsets, and then impose a (CK2)-type relation corresponding exactly to those subsets. More formally, a separated graph is a pair (E, C), where E is a graph, C = ⊔ v∈E 0 C v , and, for each v ∈ E 0 , C v is a partition of s −1 (v) (into pairwise disjoint nonempty subsets). (In case v is a sink, C v is taken to be the empty family of subsets of s −1 (v).)

Definition 5.6. Let E be any graph and K any field. Let E denote the extended graph of E, and K E the path K-algebra of E. The Leavitt path algebra of the separated graph (E, C) with coefficients in the field K is the quotient of K E by the ideal generated by these two types of relations:
(SCK1) for each X ∈ C, e * f = δ e,f r(e) for all e, f ∈ X; and
(SCK2) for each non-sink v ∈ E 0 , v = ∑ e∈X ee * for every finite X ∈ C v .
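To illustrate Definition 5.6, here is a toy example of our own: suppose s −1 (v) = {e, f, g} and C v = {X 1 , X 2 } with X 1 = {e, f } and X 2 = {g}. The imposed relations at v are then:

```latex
% (SCK1), applied within each X_i separately:
e^*e = r(e),\quad f^*f = r(f),\quad g^*g = r(g),\quad e^*f = f^*e = 0;
% (SCK2), one relation per finite member of C_v:
v = ee^* + ff^* \qquad\text{and}\qquad v = gg^*.
```

Notably, no relation at all is imposed on e * g or f * g, since e, f and g lie in different members of C v ; this is precisely the extra freedom afforded by the separated-graph construction.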
So the usual Leavitt path algebra L K (E) is exactly L K (E, C), where each C v is defined to be the subset {s −1 (v)} if v is not a sink, and ∅ otherwise. Leavitt path algebras of separated graphs include a much wider class of algebras than those which arise as Leavitt path algebras in the standard construction. For instance, the algebras of the form L K (m, n) for m ≥ 2 originally studied by Leavitt in [76] do not arise as L K (E) for any graph E. On the other hand, as shown in [31,Proposition 2.12], L K (m, n) (m ≥ 2) appears as a full corner of the Leavitt path algebra of an explicitly described separated graph (having two vertices and m + n edges). In particular, L K (m, n) is Morita equivalent to the Leavitt path algebra of a separated graph.
Of significantly greater importance is a Bergman-like realization result, which shows that the collection of Leavitt path algebras of separated graphs is extremely broad. Consequently, V(L K (E, C)) need not share the separativity nor the refinement properties of the standard Leavitt path algebras L K (E). Furthermore, the ideal structure of L K (E, C) is in general significantly more complex than that of L K (E), but a description of the idempotent-generated ideals can be achieved (solely in terms of graph-theoretic information).

Cohn path algebras.
In the previous subsection we saw one way to modify the (CK2) relation, namely, by imposing it on subsets of s −1 (v) for v ∈ E 0 .
A second way to modify the (CK2) relation is to simply eliminate it.
Definition 5.8. Let E be any graph and K any field. The Cohn path algebra C K (E) is the path K-algebra K E of the extended graph of E, modulo the relation (CK1) e * f = δ e,f r(e) for each e, f ∈ E 1 .
The terminology "Cohn path algebra" postdates the Leavitt path algebra terminology, and owes to the fact that for each n ≥ 1, the algebra C K (R n ) (for R n the rose with n petals graph) is precisely the algebra U 1,n described and investigated by Cohn in [55].
Indeed, even the case n = 1 is of interest here: C K (R 1 ) is the unital K-algebra A generated by an element e for which e * e = 1 (and no other relation involving e). Thus we get that C K (R 1 ) is exactly the Jacobson algebra described in Example 2.14, so that (using the computation presented in that Example) we have C K (R 1 ) ∼= L K (T ), the Leavitt path algebra of the Toeplitz graph. This isomorphism between a Cohn path algebra and a Leavitt path algebra is not a coincidence.
Theorem 5.9. [6, Section 1.5] Let E be any graph. Then there exists a graph F (which is explicitly constructed from E) for which C K (E) ∼ = L K (F ). That is, every Cohn path algebra is isomorphic to a Leavitt path algebra.
In particular, the explicit construction mentioned in Theorem 5.9 of the graph F from the graph E in case E = R 1 yields that F = T . So although at first glance the Cohn path algebra construction seems less restrictive than the Leavitt path algebra construction, the collection of algebras which arise as C K (E) is (properly) contained in the collection of algebras which arise as L K (E). (One way to see that the containment is proper is to note that the Cohn path algebra C K (E) has Invariant Basis Number for any finite graph E; see [12].)

One may view the Leavitt path algebras and Cohn path algebras as occupying the opposite ends of a spectrum: in the former, we impose the (CK2) relation at all (regular) vertices, while, in the latter, we do not impose it at any of the vertices. The expected middle-ground construction may be formalized: if X is any subset of the regular vertices Reg(E) of E, then the Cohn path K-algebra relative to X, denoted C X K (E), is the algebra C K (E), modulo the (CK2) relation imposed only at the vertices v ∈ X.
In this notation, C ∅ K (E) = C K (E) and C Reg(E) K (E) = L K (E). Theorem 5.9 generalizes appropriately from Cohn path algebras to relative Cohn path algebras.

Additional constructions.
We close this section with a description of four additional Leavittpath-algebra-inspired constructions.
Cohn-Leavitt algebras. The following (not unexpected) mixing-and-matching of the Leavitt path algebras of separated graphs with the relative Cohn path algebras has been defined and studied in [31].

Definition 5.10. Let (E, C) be a separated graph. Let C fin denote the subset of C consisting of those X for which |X| is finite. Let S be any subset of C fin . Denote by CL K (E, C, S) the quotient of the path K-algebra K E, modulo the relations (SCK1) of Definition 5.6, together with the relations (SCK2) for the sets X ∈ S. CL K (E, C, S) is called the Cohn-Leavitt algebra of the triple (E, C, S).
Kumjian-Pask algebras. Any directed graph E = (E 0 , E 1 , s, r) may be viewed as a category Γ E ; the objects of Γ E are the vertices E 0 , and, for each pair v, w ∈ E 0 , the morphism set Hom Γ E (v, w) consists of those elements of Path(E) having source v and range w. Composition is concatenation. As well, the set Z + is a category with one object, and morphisms given by the elements of Z + , where composition is addition. In this level of abstraction, the length map ℓ : Path(E) → Z + is a functor, which satisfies the following factorization property: if λ ∈ Path(E) and ℓ(λ) = m + n, then there are unique µ, ν ∈ Path(E) such that ℓ(µ) = m, ℓ(ν) = n, and λ = µν. Conversely, we may view a category as the morphisms of the category, where the objects are identified with the identity morphisms. Then any category Λ which admits a functor d : Λ → Z + having the factorization property can be viewed as a directed graph E Λ in the expected way.
With these observations as motivation, one defines a higher rank graph, as follows.
Definition 5.11. Let k be a positive integer. View the additive semigroup (Z + ) k as a category with one object, and view a category as the morphisms of the category, where the objects are identified with the identity morphisms. A graph of rank k (or simply a k-graph) is a countable category Λ, together with a functor d : Λ → (Z + ) k , which satisfies the factorization property: if λ ∈ Λ and d(λ) = m + n for some m, n ∈ (Z + ) k , then there exist unique µ, ν ∈ Λ such that d(µ) = m, d(ν) = n, and λ = µν. (So the usual notion of a graph is a 1-graph in this more general context.) Given any k-graph (Λ, d) and field K, one may define the Kumjian-Pask K-algebra KP K (Λ, d). (We omit the somewhat lengthy details of the construction; see [40] for the complete description.) In case k = 1, KP K (Λ, d) is the Leavitt path algebra L K (E Λ ).
The regular algebra of a graph. The following construction should be viewed not as a method to generalize the notion of Leavitt path algebra, but rather as a way to use the properties of Leavitt path algebras as a tool to answer what at first glance seems to be an unrelated question. The "Realization Problem for von Neumann Regular Rings" asks whether every countable conical refinement monoid can be realized as the monoid V(R) for some von Neumann regular ring R. It was shown in [16] that the only von Neumann regular Leavitt path algebras are those associated to acyclic graphs, so it would initially seem that Leavitt path algebras would not be fertile ground in the context of the Realization Problem. Nonetheless, Ara and Brustenga developed an elegant construction which provides the key connection. Using the algebra of rational power series on E, and appropriate localization techniques (inversion), they showed how to construct a von Neumann regular K-algebra Q K (E) containing L K (E), for which V(Q K (E)) ∼= V(L K (E)) (Theorem 5.12). Consequently, using the Realization Theorem (Theorem 1.15 ′ ), Theorem 5.12 yields that any monoid which arises as the graph monoid M E for a finite graph E has a positive solution to the Realization Problem. This result represented (at the time) a significant broadening of the class of monoids for which the Realization Problem had a positive solution. The result extends relatively easily to row-finite graphs (see [26, Theorem 4.3]), with the proviso that Q K (E) need not be unital in that generality.
Non-field coefficients. While nearly all of the energy expended on understanding L K (E) has focused on the graph E, one may also relax the requirement that the coefficients be taken from a field K. For a commutative unital ring R and graph E one may form the path ring RE of E with coefficients in R in the expected way; it is then easy to see how to subsequently define the Leavitt path ring L R (E) of E with coefficients in R. While some of the results given when R is a field do not hold verbatim in the more general setting (e.g., the Simplicity Theorem), one can still understand much of the structure of L R (E) in terms of the properties of E and R; see e.g. [98].
With these many generalizations of Leavitt path algebras having now been noted, a comment on the extremely robust interplay between algebras and C * -algebras is in order. In some situations, the C * -ideas preceded the algebra ideas; in other situations, the opposite; and in still others, the ideas were introduced simultaneously.
• Graph C * -algebras of countable graphs which contain infinite emitters were introduced in [63] (2000); these motivated the definition of Leavitt path algebras of such graphs in [9] (2006).
• Leavitt path algebras for arbitrary graphs were first given complete consideration in [66] (2009). The initial study of C * -algebras corresponding to arbitrary graphs appears in [18] (2013), where this notion was utilized to give the first systematic construction of C * -algebras which are prime but not primitive.
• In the context of separated graphs, both Leavitt path algebras and graph C * -algebras of these objects were introduced essentially simultaneously in the articles [30] (2011), and [31] (2012).

Current lines of research in Leavitt path algebras
In the previous five sections we have given an overview of the subject of Leavitt path algebras. In this final section we consider some of the important current research problems in the field. For additional information, see "The graph algebra problem page" at www.math.uh.edu/tomforde/GraphAlgebraProblems/ListOfProblems.html, a website built and maintained by Mark Tomforde of the University of Houston.
We have previously discussed the (currently unresolved) Rosetta Stone Question for graph algebras. More information about the Rosetta Stone is presented in Appendix 1.
6.1. The Classification Question for purely infinite simple Leavitt path algebras, a.k.a. The Algebraic Kirchberg Phillips Question. We start with what is generally agreed to be the most compelling unresolved question in the subject of Leavitt path algebras, stated concisely as:

The Algebraic Kirchberg Phillips Question: Can we drop the hypothesis on the determinants in Theorem 4.8?

More formally, the Algebraic KP Question is the following "Classification Question". Let E and F be finite graphs, and K any field. Suppose L K (E) and L K (F ) are purely infinite simple. If K 0 (L K (E)) ∼= K 0 (L K (F )) via an isomorphism which takes [L K (E)] to [L K (F )], must L K (E) ∼= L K (F )?

The name given to the Question derives from the previously mentioned Kirchberg Phillips Theorem for C * -algebras (see the discussion prior to Theorem 1.12), which yields as a special case that if E and F are finite graphs, and if C * (E) and C * (F ) are purely infinite simple graph C * -algebras with K 0 (C * (E)) ∼= K 0 (C * (F )) via an isomorphism which takes [C * (E)] to [C * (F )], then C * (E) ∼= C * (F ). In particular, the determinants of the appropriate matrices play no role.
Intuitively, the Question asks whether or not the integer det(I −A E ) can be "seen" or "recovered" inside L K (E) as an isomorphism invariant. There is indeed a way to interpret det(I − A E ) in terms of the cycle structure of E, see e.g. [92]; but this interpretation has not (yet?) been useful in this context.
With the Restricted Algebraic Kirchberg Phillips Theorem having been established, there are three possible answers to the Algebraic Kirchberg Phillips Question:

• No. That is, if det(I − A E ) and det(I − A F ) have different signs, then L K (E) and L K (F ) are not isomorphic, for any field K.

• Yes. That is, the existence of an isomorphism of the indicated type between the K 0 groups is sufficient to yield an isomorphism of the associated Leavitt path algebras, for any field K.
• Sometimes. That is, for some pairs of graphs E and F , and/or for some fields K, the answer is No, while for other pairs the answer is Yes.
One of the elegant aspects of the Algebraic KP Question is that its answer will be interesting, regardless of which of the three possibilities turns out to be correct. If the answer is No, then isomorphism classes of purely infinite simple Leavitt path algebras will match exactly the flow equivalence classes of the germane set of graphs, which would suggest that there is some deeper, as-of-yet-not-understood connection between the two subjects. If the answer is Yes, this would yield further compelling evidence for the existence of a Rosetta Stone, since then the Leavitt path algebra and graph C * -algebra results would be exactly analogous. If the answer is Sometimes, then (in addition to providing quite a surprise to those of us working in the field) this would likely require the development and utilization of a completely new set of tools in the subject. (Indeed, the Sometimes answer might be the most interesting of the three.)

Using a standard tool (the Smith Normal Form of an integer-valued matrix), it is not hard to show that the cardinality of the group K 0 (L K (E)) is |det(I − A E )| in case K 0 (L K (E)) is finite, and the cardinality is infinite precisely when det(I − A E ) = 0. So the Algebraic KP Question admits a somewhat more concise version: If K 0 (L K (E)) ∼= K 0 (L K (F )) via an isomorphism taking [L K (E)] to [L K (F )], but the signs of det(I − A E ) and det(I − A F ) are different, is it the case that L K (E) ∼= L K (F )?
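The cardinality observation above is easy to check numerically. The following Python sketch is our own illustration (the helper names det_int and k0_size are ours, not from any library): it computes |det(I − A E )| for a small adjacency matrix. For the rose with n petals (one vertex, n loops, so L K (E) = L K (1, n)), it returns n − 1, the order of K 0 (L K (1, n)) ∼= Z/(n − 1)Z.

```python
from fractions import Fraction

def det_int(M):
    """Determinant of a square integer matrix, computed exactly
    via Gaussian elimination over the rationals."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    det = Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if A[r][i] != 0), None)
        if piv is None:
            return 0
        if piv != i:                       # row swap flips the sign
            A[i], A[piv] = A[piv], A[i]
            det = -det
        det *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return int(det)

def k0_size(adjacency):
    """|K_0(L_K(E))| = |det(I - A_E)| when that determinant is nonzero;
    a return value of 0 encodes 'K_0 is infinite'."""
    n = len(adjacency)
    I_minus_A = [[(1 if i == j else 0) - adjacency[i][j] for j in range(n)]
                 for i in range(n)]
    return abs(det_int(I_minus_A))

# Rose with 4 petals: K_0 is Z/3Z, of order 3.
print(k0_size([[4]]))             # → 3
# A two-vertex example.
print(k0_size([[1, 2], [1, 1]]))  # → 2
```

The sketch treats only the cardinality; the group structure of K 0 (the full Smith Normal Form data) is of course a finer invariant.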
The analogous question about Morita equivalence asks whether or not we can drop the determinant hypothesis from Theorem 4.7. But the two questions will have the same answer: if isomorphism of the K 0 groups yields Morita equivalence of the Leavitt path algebras, then that Morita equivalence, together with Huang's Theorem, will yield isomorphism of the algebras.
Suppose E is a finite graph for which L K (E) is purely infinite simple. There is a way to associate with E a graph E − , for which L K (E − ) is purely infinite simple, for which K 0 (L K (E)) ∼= K 0 (L K (E − )), and for which det(I − A E ) = −det(I − A E − ). This is called the "Cuntz splice" process, which appends to a vertex v ∈ E 0 two additional vertices w 1 , w 2 and six additional edges (v → w 1 , w 1 → v, w 1 → w 1 , w 1 → w 2 , w 2 → w 1 , and w 2 → w 2 ). Although the isomorphism between K 0 (L K (E)) and K 0 (L K (E − )) need not in general send [L K (E)] to [L K (E − )], the Cuntz splice process allows us an easy way to produce many specific examples of pairs of Leavitt path algebras to analyze in the context of the Algebraic KP Question. The most "basic" pair of such algebras arises from a pair of graphs denoted E 2 and E 4 .

Here is an alternate approach to establishing the (analytic) Kirchberg Phillips Theorem (Theorem 1.12) in the limited context of graph C * -algebras. Using the same symbolic-dynamics techniques as those used to establish Theorem 4.8, one can establish the C * -version of the Restricted Algebraic Kirchberg Phillips Theorem (i.e., one which involves the determinants). One then "crosses the determinant gap" for a single pair of algebras, by showing that C * (E 2 ) ∼= C * (E 4 ); this is done using a powerful analytic tool (KK-theory). Finally, again using analytic tools, one shows that this one particular crossing of the determinant gap allows for the crossing of the gap for all germane pairs of graph C * -algebras. But neither KK-theory, nor the tools which yield the extension from one crossing to all crossings, seem to accommodate analogous algebraic techniques.
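The sign change det(I − A E ) = −det(I − A E − ) can be verified directly on incidence matrices. The following Python sketch is our own illustration (function names are ours), using the six-edge description of the Cuntz splice; the example graph is the rose with two petals, whose incidence matrix is the 1 × 1 matrix [2].

```python
def cuntz_splice(A, v):
    """Incidence matrix of the Cuntz splice of A at vertex v:
    two new vertices w1, w2 and six new edges
    v->w1, w1->v, w1->w1, w1->w2, w2->w1, w2->w2."""
    n = len(A)
    B = [row[:] + [0, 0] for row in A] + [[0] * (n + 2), [0] * (n + 2)]
    w1, w2 = n, n + 1
    for (s, t) in [(v, w1), (w1, v), (w1, w1), (w1, w2), (w2, w1), (w2, w2)]:
        B[s][t] += 1
    return B

def det_I_minus(A):
    """det(I - A), by naive cofactor expansion (fine for tiny matrices)."""
    M = [[(1 if i == j else 0) - A[i][j] for j in range(len(A))]
         for i in range(len(A))]
    def det(M):
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] *
                   det([row[:j] + row[j + 1:] for row in M[1:]])
                   for j in range(len(M)))
    return det(M)

A = [[2]]                                # rose with two petals
B = cuntz_splice(A, 0)
print(det_I_minus(A), det_I_minus(B))    # → -1 1
```

As expected, the two determinants have equal absolute value (so isomorphic finite K 0 groups of the same order) but opposite signs.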
The pair {E 2 , E 4 } can appropriately be viewed as the "smallest" pair of graphs of interest in this context, as follows. We say a graph has Condition (Sing) in case there are no parallel edges in the graph (i.e., that the incidence matrix A E consists only of 0's and 1's). It can be shown that, up to graph isomorphism, there are 2 (resp., 34) graphs having two (resp., three) vertices, and having Condition (Sing), and for which the corresponding Leavitt path algebras are purely infinite simple. (See [4].) For each of these graphs E, det(I − A E ) ≤ 0. So finding an appropriate pair of graphs with (Sing) and with unequal (sign of the) determinant requires at least one of the two graphs to contain at least four vertices.
To the author's knowledge, no Conjecture regarding what the answer to the Algebraic KP Question should be has appeared in the literature.
6.2. The Classification Question for graphs with finitely many vertices and infinitely many edges. We consider now the collection S of those graphs E having finitely many vertices, but (countably) infinitely many edges, and for which L K (E) is (necessarily unital) purely infinite simple. The Purely Infinite Simplicity Theorem (Theorem 4.3) extends to this generality, so we can fairly easily determine whether or not a given graph E is in S. Unlike the case for finite graphs, a description of K 0 (L K (E)) for E ∈ S cannot be given in terms of the cokernel of an integer-valued matrix transformation from Z |E 0 | to Z |E 0 | . Nonetheless, there is still a relatively easy way to determine K 0 (L K (E)), so that this group remains a useful player in this context.
For a graph E let Sing(E) denote the set of singular vertices of E, i.e., the set of vertices which are either sinks or infinite emitters. Ruiz and Tomforde in [90] achieved the following.

Theorem 6.1. Let E, F ∈ S. If K 0 (L K (E)) ∼= K 0 (L K (F )) and |Sing(E)| = |Sing(F )|, then L K (E) is Morita equivalent to L K (F ).
So, while "the determinant of I − A E " is clearly not defined here in the usual sense (because at least one of the entries would be the symbol ∞), the isomorphism class of K 0 together with the number of singular vertices is enough information to determine Morita equivalence. Although this is quite striking, it is not completely satisfying, in that it remains unclear whether or not |Sing(E)| is an algebraic property of L K (E).
Continuing the search for a Classification Theorem which is cast completely in terms of algebraic properties of the underlying algebras, Ruiz and Tomforde were able to show that for a certain type of field (those with no free quotients), there is such a result. In a manner similar to the computation of K 0 (L K (E)) for E ∈ S, there is a way to easily compute K 1 (L K (E)) as well.
Theorem 6.2. [90, Theorem 7.1] Suppose E, F ∈ S, and suppose that K is a field with no free quotients. Then L K (E) is Morita equivalent to L K (F ) if and only if K 0 (L K (E)) ∼= K 0 (L K (F )) and K 1 (L K (E)) ∼= K 1 (L K (F )).

The collection of fields having no free quotients includes algebraically closed fields, R, finite fields, perfect fields of positive characteristic, and others. However, the field Q is not included in this list. Indeed, the authors in [90, Example 10.2] give an example of graphs E, F ∈ S for which K 0 (L Q (E)) ∼= K 0 (L Q (F )) and K 1 (L Q (E)) ∼= K 1 (L Q (F )), but L Q (E) is not Morita equivalent to L Q (F ). There are many open questions here. For instance, might there be an integer N for which, if K i (L K (E)) ∼= K i (L K (F )) for all 0 ≤ i ≤ N, then L K (E) and L K (F ) are Morita equivalent for all fields K? Of note in this context is that, unlike the situation for graph C * -algebras (in which "Bott periodicity" yields that K 0 and K 1 are the only distinct K-groups), there is no analogous result for the K-groups of Leavitt path algebras. Further, although a long exact sequence for the K-groups of L K (E) has been computed in [27, Theorem 7.6], this sequence does not yield easily recognizable information about K i (L K (E)) for i ≥ 2.
Finally, a recent intriguing result presented in [65] demonstrates that, if K is a finite extension of Q, then the pair (K 0 (L K (E)), K 6 (L K (E))) provides a complete invariant for the Morita equivalence classes of Leavitt path algebras arising from graphs in S, while none of the pairs (K 0 (L K (E)), K i (L K (E))) for 1 ≤ i ≤ 5 does so.

6.3. Graded Grothendieck groups, and the corresponding Graded Classification Question. The Algebraic Kirchberg Phillips Question, motivated by the corresponding C * -algebra result, is not the only natural classification-type question to ask in the context of Leavitt path algebras. Having in mind the importance that the Z-grading on L K (E) has been shown to play in the multiplicative structure, Hazrat in [68] has built the machinery which allows for the casting of an analogous question from the graded point of view.
There is a very well developed theory of graded modules over group-graded rings, see, e.g., [80]. (The theory is built for all groups, and is particularly robust in case the group is Z, the case of interest for Leavitt path algebras.) If A = ⊕ t∈Z A t is a Z-graded ring and M is a left A-module, then M is graded in case M = ⊕ i∈Z M i , and a t m i ∈ M t+i whenever a t ∈ A t and m i ∈ M i . If M is a Z-graded A-module, and j ∈ Z, then the suspension module M(j) is a graded A-module, for which M(j) = M as A-modules, with Z-grading given by setting M(j) i = M j+i for all i, j ∈ Z.
In a standard way, one can define the notion of a graded finitely generated projective module, and subsequently build the monoid V gr of isomorphism classes of such modules, with ⊕ as operation. If [M] ∈ V gr , then [M(j)] ∈ V gr for each j ∈ Z, which yields a Z-action on V gr , and thus by extension gives V gr the structure of a Z[x, x −1 ]-module. In a manner completely analogous to the non-graded case, one may define the graded Grothendieck groups K gr i for each i ≥ 0; the suspension operation yields a Z[x, x −1 ]-module structure on these as well.
From this graded-module point of view, one can now ask about structural information of the Z-graded K-algebra L K (E) which might be gleaned from the K gr i groups. A reasonable initial question might be to see whether the graded version of the Kirchberg Phillips Theorem holds. That is, suppose that E and F are finite graphs for which L K (E) and L K (F ) are purely infinite simple, and suppose K gr 0 (L K (E)) ∼ = K gr 0 (L K (F )) as Z[x, x −1 ]-modules, via an isomorphism which takes [L K (E)] to [L K (F )]. Is it necessarily the case that L K (E) ∼ = L K (F ) as Z-graded K-algebras?
As it turns out, the purely infinite simple hypothesis is not the natural one to start with in the graded context. In fact, Hazrat in [68] makes the following Conjecture, which at first glance might seem somewhat audacious.

Conjecture 6.3. Let E and F be finite graphs. Then L K (E) ∼= L K (F ) as Z-graded K-algebras if and only if K gr 0 (L K (E)) ∼= K gr 0 (L K (F )) as Z[x, x −1 ]-modules, via an isomorphism which takes [L K (E)] to [L K (F )].

In [68, Theorem 4.8], Hazrat verifies Conjecture 6.3 in case the graphs E and F are polycephalic (essentially, mixtures of acyclic graphs, or graphs which can be described as "multiheaded comets" or "multiheaded roses", in which the cycles and/or roses have no exits). As mentioned in Historical Plot Line #1, in work that predates the introduction of the general definition of Leavitt path algebras, the four authors of [29] investigated the notion of a fractional skew monoid ring, which in particular situations is denoted A[t + , t − , α]. Recast in the language of Leavitt path algebras, the discussion in [29, Example 2.5] yields that, when E is an essential graph (i.e., one having no sinks or sources), then L K (E) = L K (E) 0 [t + , t − , α] for suitable elements t + , t − ∈ L K (E) and a corner-isomorphism α of the zero component L K (E) 0 .
When E is a finite graph with no sinks, then L K (E) is strongly graded ([69, Theorem 2]), which yields (by a classical theorem of Dade) that the category of graded modules over L K (E) is equivalent to the category of (all) modules over the zero component L K (E) 0 . Thus, when E has no sinks, we have reason to expect that the zero component might play a role in the graded theory. In a deep result (which relies heavily on ideas from symbolic dynamics), Ara and Pardo [37, Theorem 4.1] prove the following modified version of Conjecture 6.3.

Theorem 6.4. Let E and F be finite essential graphs. Write L K (E) = L K (E) 0 [t + , t − , α] as described above. Then the following are equivalent.

(1) There is an isomorphism K gr 0 (L K (E)) ∼= K gr 0 (L K (F )) of Z[x, x −1 ]-modules which takes [L K (E)] to [L K (F )].

(2) There exists a locally inner automorphism g of L K (E) 0 for which L K (E) 0 [t + , t − , α • g] ∼= L K (F ) as Z-graded K-algebras.
A complete resolution of Conjecture 6.3 currently remains elusive.
6.4. Connections to noncommutative algebraic geometry. One of the basic ideas of (standard) algebraic geometry is the correspondence between geometric spaces and commutative algebras. Over the past few decades, significant research energy has been focused on appropriately extending this correspondence to the noncommutative case; the resulting theory is called noncommutative algebraic geometry.

Suppose A is a Z + -graded algebra (i.e., a Z-graded algebra for which A n = {0} for all n < 0). Let Gr(A) denote the category of Z-graded left A-modules (with graded homomorphisms), and let Fdim(A) denote the full subcategory of Gr(A) consisting of the graded A-modules which are the sum of their finite dimensional submodules. Denote by QGr(A) the quotient category Gr(A)/Fdim(A). The category QGr(A) turns out to be one of the fundamental constructions in noncommutative algebraic geometry. In particular, if E is a directed graph, then the path algebra KE is Z + -graded in the usual way (by setting deg(v) = 0 for each vertex v, and deg(e) = 1 for each edge e), and so one may construct the category QGr(KE).
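For orientation, the commutative prototype of the QGr construction is Serre's classical theorem, which we recall here (a standard fact, not stated in this survey): for the polynomial ring in n + 1 variables with its usual grading, passing to Gr modulo Fdim recovers the quasi-coherent sheaves on projective n-space.

```latex
% Serre's theorem: for A = K[x_0, ..., x_n] graded by total degree,
% the quotient category QGr(A) = Gr(A)/Fdim(A) is equivalent to the
% category of quasi-coherent sheaves on projective n-space.
\[
\operatorname{QGr}\bigl(K[x_0,\dots,x_n]\bigr) \;\simeq\; \operatorname{Qcoh}(\mathbb{P}^n).
\]
```

It is this equivalence which motivates viewing QGr(A) as "the noncommutative projective scheme" attached to a general Z + -graded algebra A.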
Let E nss denote the graph gotten by repeatedly removing all sinks and sources (and their incident edges) from E. It turns out that the categories QGr(KE) and Gr(L K (E nss )) are equivalent. Moreover, since L K (E nss ) is strongly graded, these categories are also equivalent to the full category of modules over the zero-component (L K (E nss )) 0 .
So the Leavitt path algebra construction arises naturally in the context of noncommutative algebraic geometry. (The appearance of Leavitt path algebras in this setting is clarified by the notion of a Universal Localization, see e.g. [91].) In general, when the Z + -graded K-algebra A arises as an appropriate graded deformation of the standard polynomial ring K[x 0 , ..., x n ], then QGr(A) shares many similarities with projective n-space P n ; parallels between them have been studied extensively (see e.g. [96]). However, in general, an algebra of the form KE does not arise in this way; and for these, as asserted in [95], "it is much harder to see any geometry hiding in QGr(KE)." In specific situations there are some geometric perspectives available (see e.g. [93]), but the general case is not well understood.

6.5. Tensor products. As described in Section 3.5, the algebras L K (1, 2) ⊗ L K (1, 2) and L K (1, 2) are not isomorphic. However, related questions remain unresolved; for instance, it is not known whether L K (1, 2) ⊗ L K (1, 2) and L K (1, 2) are Morita equivalent.
6.6. The Realization Problem for von Neumann regular rings. Although significant progress has been made in resolving the Realization Problem for von Neumann regular rings (see the discussion prior to Theorem 5.12), there is as of yet no complete answer. An excellent survey of the main ideas relevant to this endeavor can be found in [25]. Using direct limit arguments, one can show that the graph monoid M E corresponding to a countable graph E can be realized as V(R) for a von Neumann regular algebra R. Indeed, M E is constructed as a direct limit of monoids of the form M F , where the graphs F are finite; in particular, M E is a direct limit of finitely generated refinement monoids. Furthermore, R can be constructed as a direct limit of (von Neumann regular) quotient algebras of the form Q K (F ) for F finite.
More generally, one can divide the (countable refinement) monoids arising in the Realization Problem into two types: tame (those which can be constructed as direct limits of finitely generated refinement monoids), and the others (called wild). Investigations (by Ara and Goodearl, see [32]) continue into whether or not every finitely generated refinement monoid is realizable; whether or not the realization passes to direct limits; and whether or not there are wild monoids which are not realizable.
7. Appendix 1: Some properties of L K (E) and C * (E) which suggest the existence of a Rosetta Stone

It has become apparent that there is a strong, but mysterious, relationship between the structure of the Leavitt path algebra L C (E) and the corresponding graph C * -algebra C * (E). In this context it is helpful to keep in mind that while L C (E) may always be viewed as a dense * -subalgebra of C * (E) (see Proposition 2.3), the two algebras are in general clearly different as rings: indeed, they coincide only when E is finite and acyclic.
We focus in this Appendix on finite graphs, so that the corresponding Leavitt path algebra L C (E) or graph C * -algebra C * (E) is unital (and C * (E) is separable as well). But many of the observations we make here hold more generally.
Any C * -algebra A wears two hats: not only is A a ring, but A comes equipped with a topology as well, so that one may view the ring-theoretic structure of A from a topological/analytic viewpoint. The standard example is this: one may consider the (algebraic) simplicity of A as a ring (no nontrivial two-sided ideals), or the (topological) simplicity of A as a topological ring (no nontrivial closed two-sided ideals). In general, the algebraic and topological properties of a given C * -algebra A need not coincide.
The graph E is called cofinal in case every vertex of E connects to every cycle and every sink of E. (This turns out to be equivalent to E 0 having the property that the only hereditary saturated subsets of E 0 are ∅ and E 0 .) As a reminder: E has Condition (L) if every cycle in E has an exit; E has Condition (K) if there is no vertex v of E which has exactly one simple closed path based at v; and E is downward directed if for each pair of vertices v, w of E there exists a vertex y for which v ≥ y and w ≥ y. By [47,Proposition 5.1] (for the case without sources), and [86] (for the general case), C * (E) is (topologically) simple if and only if E is cofinal and has Condition (L).
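For finite graphs, cofinality and Condition (L) are easy to test mechanically. The following Python sketch is our own illustration (function names are ours), with a graph given as a list of directed edges on vertices 0, ..., n − 1; it implements the criterion just stated (cofinal plus Condition (L) gives simplicity).

```python
def reach(n, edges):
    """reach[u][v] == True iff there is a path of length >= 1 from u to v."""
    R = [[False] * n for _ in range(n)]
    for u, v in edges:
        R[u][v] = True
    for k in range(n):                 # transitive closure, Floyd-Warshall style
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

def is_simple(n, edges):
    """Test 'cofinal and Condition (L)' for a finite graph."""
    R = reach(n, edges)
    out = [0] * n
    for u, _ in edges:
        out[u] += 1
    sinks = [v for v in range(n) if out[v] == 0]
    cycle_verts = [v for v in range(n) if R[v][v]]
    # Cofinal: every vertex connects to every sink and to every cycle.
    cofinal = all(R[u][t] or u == t
                  for u in range(n) for t in sinks + cycle_verts)
    # Condition (L): an exitless cycle is exactly a cycle all of whose
    # vertices emit a single edge, so look for a cycle among those edges.
    deg1_edges = [(u, v) for u, v in edges if out[u] == 1]
    has_exitless_cycle = any(reach(n, deg1_edges)[v][v] for v in range(n))
    return cofinal and not has_exitless_cycle

# Rose with two petals: simple (indeed purely infinite simple).
print(is_simple(1, [(0, 0), (0, 0)]))   # → True
# A single loop: its cycle has no exit, so Condition (L) fails.
print(is_simple(1, [(0, 0)]))           # → False
```

The Condition (L) test rests on the observation that a cycle has no exit precisely when each of its vertices has out-degree one.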
By [57, p. 215], for any unital C * -algebra A, A is topologically simple if and only if A is algebraically simple.
Result: The following are equivalent for any finite graph E: L C (E) is simple; C * (E) is simple; E is cofinal and satisfies Condition (L).

By [36, Theorem 3.5], the monoid V(L K (E)) is independent of the field K; specifically, V(L K (E)) ∼= M E , the graph monoid of E.
Result: For any finite graph E and any field K, the following semigroups are isomorphic: V(L K (E)), M E , and V(C * (E)).

Analytic: The simple C * -algebra A is called purely infinite (simple) if for every positive x ∈ A, the subalgebra xAx contains an infinite projection. (Source: [58, p. 186].)

Algebraic: By [35, Theorem 1.6], (algebraic) purely infinite simplicity for unital rings is equivalent to: R is not a division ring, and for all nonzero x ∈ R there exist α, β ∈ R for which αxβ = 1.
By [50, Proposition 6.11.5], (topological) purely infinite simplicity for unital C * -algebras is equivalent to: A ≠ C and for every x ≠ 0 in A there exist α, β ∈ A for which αxβ = 1. (Remark: Blackadar defines purely infinite simplicity this way, and then shows this definition is equivalent to Cuntz' definition given in [58].) Easily, for any graph E, C * (E) is a division ring if and only if E is a single vertex, in which case C * (E) = C.
Thus we have, for graph C * -algebras, C * (E) is (algebraically) purely infinite simple if and only if C * (E) is (topologically) purely infinite simple.
(We note that the first three properties have been shown to be equivalent for arbitrary graphs as well, with the fourth condition being replaced by: E satisfies Condition (L), is downward directed, and has the Countable Separation Property. See Theorem 5.5 and [19].) It is interesting to note that in the situations in which we have a result which suggests the existence of a Rosetta Stone, the algebraic and topological conditions on C * (E) are identical. Perhaps there is something in this observation which will lead to a deeper understanding of why there seems to be such a strong relationship between the properties of L C (E) and C * (E).
There are indeed situations where the analogies between the Leavitt path algebras and graph C * algebras are not as tight as those presented above.
A property for which the algebraic and analytic results are not identical: Primeness

Algebraic: R is a prime ring in case {0} is a prime ideal of R; that is, in case for any two-sided ideals I, J of R, if IJ = {0} then I = {0} or J = {0}.

There are a few other situations where the properties of L C (E) and C * (E) do not match up exactly. For instance, the only possible values of the (algebraic) stable rank of L C (E) are 1, 2, and ∞; as well, the only possible values of the (topological) stable rank of C * (E) are 1, 2, and ∞. But among individual graphs, the values may be different: if E = R 1 , then the stable rank of L C (R 1 ) is 2, while the stable rank of C * (R 1 ) is 1.
Summary of Appendix 1: A "Rosetta Stone for graph algebras" refers to an overarching principle which would allow an understanding as to why there is such an extremely tight (but not perfect) relationship between various properties of Leavitt path algebras and graph C * -algebras, as suggested by the examples given in this section. Does such a Rosetta Stone exist?

Appendix 2: A number-theoretic observation
Let L K (1, n) denote the Leavitt algebra of order n; so R = L K (1, n) has the property that R ∼= R n as left R-modules. By Theorem 3.10 we have L K (1, n) ∼= M d (L K (1, n)) ⇔ g.c.d.(d, n − 1) = 1.
Indeed, when the appropriate number-theoretic condition is satisfied then the isomorphism may be explicitly constructed.
The key to constructing such an isomorphism lies in considering a partition of {1, 2, ..., d} into two nonempty disjoint subsets S 1 ⊔ S 2 , described as follows.
For the current discussion we focus only on r. Note we have r ≥ 1. Let s = d − (r − 1). Since g.c.d.(d, r − 1) = 1 we easily see g.c.d.(d, s) = 1. Now consider the sequence Σ d,r , given by Σ d,r = 1, 1 + s, 1 + 2s, ..., 1 + (d − 1)s, a list of d integers interpreted mod d. (Here we interpret the residue 0 mod d as d.) Since g.c.d.(d, s) = 1, elementary number theory gives that, as a set, the elements of Σ d,r form a complete set of residues mod d.
In particular, for some i r (1 ≤ i r ≤ d) we have 1 + (i r − 1)s ≡ r − 1 mod d.
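Both assertions (that Σ d,r is a complete set of residues, and the existence of the index i r ) can be checked computationally. The following Python sketch is our own illustration (the function name sigma is ours), with residues written in {1, ..., d} as above.

```python
from math import gcd

def sigma(d, r):
    """The sequence 1, 1+s, 1+2s, ..., 1+(d-1)s taken mod d,
    where s = d - (r - 1) and the residue 0 is written as d."""
    assert gcd(d, r - 1) == 1
    s = d - (r - 1)
    return [((1 + i * s - 1) % d) + 1 for i in range(d)]

d, r = 8, 4                    # g.c.d.(8, 3) = 1
seq = sigma(d, r)
print(seq)
# Complete set of residues mod d:
print(sorted(seq) == list(range(1, d + 1)))                 # → True
# The index i_r with 1 + (i_r - 1)s ≡ r - 1 (mod d):
i_r = seq.index((r - 1 - 1) % d + 1) + 1
print((1 + (i_r - 1) * (d - (r - 1))) % d == (r - 1) % d)   # → True
```

For d = 8, r = 4 the sequence is 1, 6, 3, 8, 5, 2, 7, 4, and indeed every residue in {1, ..., 8} occurs exactly once.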
(1) For fixed d, r having g.c.d.(d, r − 1) = 1, do the sequences Σ d,r 1 and Σ d,r 2 arise in contexts other than that of isomorphisms between matrix rings over Leavitt algebras?
Specifically, in the proof of Theorem 3.10, the ordering properties of the sequences Σ d,r 1 and Σ d,r 2 are not utilized; rather, only the partition S d,r 1 ⊔ S d,r 2 of {1, 2, ..., d} as a set is used.

Appendix 3: The graph moves
We give in this Appendix the formal definitions of each of the six "graph moves" which arise in the symbolic dynamics analysis associated to the Restricted Algebraic Kirchberg Phillips Theorem. We conclude by presenting the "source elimination" process as well.