Implicit Component-Graph: A Discussion
Abstract
Component-graphs are defined as the generalization of component-trees to images taking their values in partially ordered sets. Similarly to component-trees, component-graphs are a lossless image model, and can allow for the development of various image processing approaches (anti-extensive filtering, segmentation by node selection). However, component-graphs are not trees, but directed acyclic graphs. This induces a structural complexity associated to a higher combinatorial cost. These properties make the handling of component-graphs a non-trivial task. We propose a preliminary discussion on a new way of building and manipulating component-graphs, with the purpose of reaching reasonable space and time costs. Tackling these complexity issues is indeed required for actually involving component-graphs in efficient image processing approaches.
Keywords
Component-graph, Algorithmics, Data-structure

1 Introduction
In mathematical morphology, connected operators [1] are often defined from hierarchical image models, namely trees, that represent an image by considering simultaneously its spatial and spectral properties. Among the most popular tree structures modelling images, one can cite the component-tree (CT) [2], the tree of shapes (ToS) [3], and the binary partition tree (BPT) [4]. The CT and ToS —by contrast with the BPT, that derives from both an image and extrinsic criteria— are intrinsic models, that depend only on the image pixel values, and are built in a deterministic way.
Initially, the CT and the ToS (which can be seen as an autodual version of the CT) were defined for grey-level images, i.e. images whose values are totally ordered. Different ways to extend the notions of CT and ToS to multivalued images, i.e. images whose values are partially ordered, were investigated during the last years. The purpose was, in particular, to allow for the use of such image models in a wider range of image processing applications. The first attempt of extending the notion of CT to partially-ordered values was proposed in [5], leading to the pioneering notion of component-graph (CG). The main structural properties of CGs were further established [6]. From an algorithmic point of view, first strategies for building CGs were investigated. Except in a specific case —where the partially ordered set of values is structured itself as a tree [7]— these first attempts emphasised the high computational cost of the construction process, and the high spatial cost of the directed acyclic graph (DAG) explicitly modelling a CG [8, 9]. By relaxing certain constraints, leading to an improved complexity, the notion of CG was successfully involved in the development of an efficient extension of ToS to multivalued images [10]. From an applicative point of view, the notion of CG, coupled with the recent notion of shaping [11], also led to preliminary, yet promising, results for multimodal image processing [12].
We propose a discussion on a new way of building and manipulating CGs. A CG is complex due to its size (number of nodes and edges) and its structural complexity (as a DAG), which induce high space and time computational costs. Our strategy is to take advantage of the structural properties of a CG in order to build dedicated data-structures gathering information useful for handling it, without explicitly computing its whole DAG. We then hope to obtain a reasonable trade-off between the cost of modelling the CG, and the cost of navigating within it for further image processing purposes.
The following discussion is mainly theoretical, providing preliminary ideas; in particular, we will present neither experimental studies nor application cases. The remainder of the paper is organized as follows. Sections 2 and 3 provide notations and recall basic notions on CGs. In Sect. 4, we enumerate the different functionalities that should be offered by implicit CG data-structures. Sections 5–11 constitute the core of the paper, where we develop our discussion. Section 12 provides concluding remarks.
2 Notations
The used notations are the same as in [6, 7, 8, 9]. We only recall here the nonstandard ones.
For any symbol further used to denote an order relation (\(\subseteq \), \(\leqslant \), \(\trianglelefteq \), etc.), the inverse symbol (\(\supseteq \), \(\geqslant \), \(\trianglerighteq \), etc.) denotes the associated dual order, while the symbol without lower bar (\(\subset \), <, \(\vartriangleleft \), etc.) denotes the associated strict order.
If \((X,\leqslant )\) is an ordered set and \(x \in X\), we note \(x^\uparrow = \{y \in X \mid y \geqslant x\}\) and \(x^\downarrow = \{y \in X \mid y \leqslant x\}\), namely the sets of the elements greater and lower than x, respectively. If \(Y \subseteq X\), the sets of all the maximal and minimal elements of Y are noted \(\bigtriangledown ^\leqslant Y\) and \(\bigtriangleup ^\leqslant Y\), respectively.
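As a point of reference for the sequel, these poset primitives admit a direct set-based implementation. The Python sketch below passes the order as a hypothetical predicate `leq`; this encoding (and all names) is a choice of the sketch, not of the paper.

```python
# Poset primitives of Sect. 2, sketched over a finite ground set.
# `leq(a, b)` is assumed to implement the order relation a <= b.

def up_set(X, x, leq):
    """x^up: the elements of X greater than or equal to x."""
    return {y for y in X if leq(x, y)}

def down_set(X, x, leq):
    """x^down: the elements of X lower than or equal to x."""
    return {y for y in X if leq(y, x)}

def maximal(Y, leq):
    """Maximal elements of Y: no strictly greater element in Y."""
    return {y for y in Y if not any(leq(y, z) and z != y for z in Y)}

def minimal(Y, leq):
    """Minimal elements of Y: no strictly lower element in Y."""
    return {y for y in Y if not any(leq(z, y) and z != y for z in Y)}
```

For instance, with divisibility as the (partial) order on {1, 2, 3, 4, 6, 12}, `maximal` returns {12} and `minimal` returns {1}.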
3 Basic Notions on Component-Graphs
Let \(\varOmega \) be a non-empty finite set. Let \(V\) be a non-empty finite set equipped with an order (i.e. reflexive, transitive, antisymmetric) relation \(\leqslant \). We assume that \((V,\leqslant )\) admits a minimum, noted \(\bot \).
Definition 1
We then have the following definition for the component-graphs.
Definition 2
(Component-graph [5, 6]). The \(\varTheta \)-component-graph (or simply, the component-graph) of I is the Hasse diagram \({\mathfrak {G}}= (\varTheta , \blacktriangleleft )\) of the ordered set \((\varTheta , \trianglelefteq )\). The elements of \(\varTheta \) are called nodes; the elements of \(\blacktriangleleft \) are called edges; \((\varOmega ,\bot )\) is called the root; the elements of \(\bigtriangleup ^{\trianglelefteq } \varTheta \) are called the leaves of the component-graph.
The component-graph \({\mathfrak {G}}\) of I is then the Hasse diagram of the ordered set \((\varTheta , \trianglelefteq )\) (see Fig. 1, for an example). Two other (simpler) variants^{1} of CGs were also proposed in [6]. They will not be considered in this study for the sake of concision.
4 Problem Statement
 (i)
Which are the nodes of \({\mathfrak {G}}\) (i.e. how to identify them)?
 (ii)
What is a node of \({\mathfrak {G}}\) (i.e. what are its support and its value)?
 (iii)
Is a node of \({\mathfrak {G}}\) lower, greater, or non-comparable to another, with respect to \(\trianglelefteq \)?
The following sections aim at discussing the construction of ad hoc data-structures that could allow for answering these questions.
5 Flat Zones and Leaves
5.1 Flat Zone Image
Let \(x,y \in \varOmega \) be two points of the image I. If x and y are adjacent, i.e. \(x \smallfrown y\), and share the same value, i.e. \(I(x) = I(y)\), then it is plain that they belong to the same valued connected components of \({\mathfrak {G}}\), i.e. for any \(K = (X,v) \in \varTheta \), we have \(x \in X\) iff \(y \in X\). As an immediate corollary, the component-graph obtained from I is isomorphic to the component-graph of the flat zone image^{2} associated to I.
A flat zone computation with linear time cost \({\mathcal {O}}(\mid \smallfrown \mid )\) can then allow us to simplify the image I into its flat zone analogue, thus reducing the space complexity of the image. From now on, and without loss of generality, we will work on such flat zone images, still noted \(I : \varOmega \rightarrow V\) for the sake of readability, since they are indeed equivalent for CG building. In particular, under this hypothesis, we have the following property.
Property 3
Let \(x,y \in \varOmega \). If x and y are adjacent, then their values are distinct, i.e. \(x \smallfrown y \Rightarrow I(x) \ne I(y)\).
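As an illustration, the flat zone simplification (together with the quotient adjacency that makes Property 3 hold) can be sketched with a union-find; the dictionary-based graph encoding and all names are choices of this sketch, not of the paper.

```python
# Flat zone computation: merge adjacent points sharing the same value,
# then quotient the adjacency relation between the resulting zones.

def flat_zones(points, adj, I):
    parent = {x: x for x in points}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # one pass over the edges of the adjacency graph
    for x in points:
        for y in adj[x]:
            if I[x] == I[y]:
                parent[find(x)] = find(y)  # x and y lie in the same flat zone

    zones = {x: find(x) for x in points}
    # adjacency inherited by the flat zone representatives
    zadj = {zones[x]: set() for x in points}
    for x in points:
        for y in adj[x]:
            if zones[x] != zones[y]:
                zadj[zones[x]].add(zones[y])
    return zones, zadj
```

In the quotient graph, adjacent zones necessarily carry distinct values, which is exactly Property 3.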
5.2 Detection of the Leaves
Since \(\varOmega \) is finite and \(\leqslant \) is antisymmetric, there exist \(n \ge 1\) points \(x \in \varOmega \) such that for all \(y \smallfrown x\), we have \(I(x) \not < I(y)\), i.e. \(I(x) \in V\) is a locally maximal value of the image I. Then, it is plain that \(K_x = (\{x\},I(x))\) is a node of \({\mathfrak {G}}\), i.e. \(K_x \in \varTheta \). More precisely, \(K_x\) is a minimal element of \({\mathfrak {G}}\), i.e. \(K_x \in \bigtriangleup ^{\trianglelefteq } \varTheta \), and is then a leaf of \({\mathfrak {G}}\) (see Definition 2). As an example, the leaves of the component-graph \({\mathfrak {G}}\) depicted on Fig. 1(c) correspond to the nodes I, S, R, and P.
The characterization of the leaves relying on a local criterion, we can then detect all of them by an exhaustive scanning of \(\varOmega \), with a linear time cost^{3} \({\mathcal {O}}(\mid \smallfrown \mid )\). In the sequel, we will denote by \(\varLambda \subseteq \varOmega \) the set of all the points of \(\varOmega \) that correspond to supports of leaves. In other words, we have \(\bigtriangleup ^{\trianglelefteq } \varTheta = \{K_x = (\{x\}, I(x)) \mid x \in \varLambda \}\).
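The leaf detection described above is a purely local test; a minimal Python sketch, with an assumed partial-order predicate `leq` (an encoding choice of this sketch):

```python
# Leaf detection (Sect. 5.2): on a flat zone image, a point x supports a
# leaf when no neighbour carries a strictly greater value.

def leaves(points, adj, I, leq):
    def lt(a, b):  # strict order derived from leq
        return leq(a, b) and a != b
    return {x for x in points
            if not any(lt(I[x], I[y]) for y in adj[x])}
```

For instance, with pairs ordered componentwise, a point whose value is non-comparable to all its neighbours' values is also a leaf.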
6 Node Encoding
Let \(K = (X,v) \in \varTheta \) be a node of \({\mathfrak {G}}\). If K is not a leaf, i.e. \(K \notin \bigtriangleup ^{\trianglelefteq } \varTheta \), then there exists a leaf \(K_x = (\{x\},I(x)) \in \bigtriangleup ^{\trianglelefteq } \varTheta \) such that \(K_x \trianglelefteq K\). In particular, Eq. (4) implies that \(x \in X\) and \(I(x) > v\). In [6, Property 1], we observed that for a given leaf \(K_x = (\{x\},I(x)) \in \bigtriangleup ^{\trianglelefteq } \varTheta \), and for any value \(v \leqslant I(x)\), there exists exactly one node \(K = (X,v) \in \varTheta \) such that \(x \in X\).
The function \(\kappa \) is obviously surjective^{5}: each node \(K = (X,v) \in \varTheta \) corresponds to at least one couple \((x,v) \in \varLambda \times V\) such that \(\kappa ((x,v)) = K\). This justifies the following property.
Property 4
The set \(\varTheta \) of the nodes of \({\mathfrak {G}}\) is defined as \(\varTheta = \kappa (\varLambda \times V) \setminus \{{K_\top }\}\).
However, \(\kappa \) is generally not injective. It is then necessary to handle the synonymy of nodes with respect to \(\kappa \).
7 Node Synonymy Handling
Let \(K_1 = (X_1, v_1), K_2 = (X_2,v_2) \in \varTheta \) be nodes of \({\mathfrak {G}}\). From Property 4, there exist \(x_1, x_2 \in \varLambda \) such that \(\kappa ((x_1,v_1)) = K_1\) and \(\kappa ((x_2,v_2)) = K_2\). If \(v_1 \ne v_2\), it is plain that \(K_1 \ne K_2\). However, when \(v_1 = v_2\), we may have \(x_1 \ne x_2\) while \(K_1 = K_2\), i.e. \(X_1 = X_2\).
In other words, a node \(K = (X,v) \in \varTheta \) —and more precisely its support X— can be represented by any point \(x \in \varLambda \cap X\), that also corresponds to the support of a leaf \(K_x = (\{x\},I(x)) \in \bigtriangleup ^{\trianglelefteq } \varTheta \). In order to provide an actual modelling of the nodes of \({\mathfrak {G}}\) from \(\kappa \), we then have to gather the elements of \(\varLambda \times V\) that encode the same node.
It is plain that for any \((x,v) \in \varLambda \times V\), the equivalence class \([(x,v)]_\sim \) is composed of elements \((x',v)\) with different \(x' \in \varLambda \), but a same value v. Then, it is possible to partition the set of equivalence classes of \(\sim \) with respect to the different values of V. Indeed, for any \(v \in V\), we can define the equivalence relation \(\sim _v\) on \(\varLambda \) as \(x_1 \sim _v x_2 \Leftrightarrow \kappa ((x_1,v)) = \kappa ((x_2,v))\). In particular, we have \([(x,v)]_\sim = [x]_{\sim _v} \times \{v\}\).
Property 5
The characterization of \(\sim \) only requires to define, for each \(v \in V\), the relations \(x \sim _v y\) such that for all \(v' > v\), we have \(x \not \sim _{v'} y\).
Practically, this property states that the issue of synonymy between nodes could take advantage of the monotony of \(\sim _v\) (\(v \in V\)) with respect to \(\leqslant \), to avoid the storage of information already carried by the structure of \((V,\leqslant )\) (see Sect. 12).
The next step is now to define a way to actually build these equivalence classes.
8 Reachable Zones
In order to build the equivalence relation \(\sim \), let us come back to the image I and its adjacency graph \((\varOmega ,\smallfrown )\). Each point \(x \in \varOmega \) has a given value \(I(x) \in V\). It is also adjacent to other points \(y \in \varOmega \) (i.e. \(x \smallfrown y\)) with values I(y). From Property 3, we have either \(I(x) < I(y)\) or \(I(x) > I(y)\) or I(x), I(y) non-comparable.
From a point \(x \in \varLambda \), we can reach certain points \(y \in \varOmega \) by a descent paradigm. More precisely, for such points y, there exists a path \(x = x_0 \smallfrown \ldots \smallfrown x_i \smallfrown \ldots \smallfrown x_t = y\) (\(t \ge 0\)) in \(\varOmega \) such that for any \(i \in [\![0, t-1]\!]\), we have \(I(x_i) > I(x_{i+1})\). In such a case, we note \(x \searrow y\). This leads to the following notion of a reachable zone.
Definition 6
(Reachable zone). Let \(x \in \varLambda \). The reachable zone of x (in I) is the set \(\rho (x) = \{y \in \varOmega \mid x \searrow y\}\).
When a point \(y \in \varOmega \) belongs to the reachable zone \(\rho (x)\) of a point \(x \in \varLambda \), the adjacency path from x to y and the > relation between its successive points imply that the node \((Y,I(y)) \in \varTheta \) with \(y \in Y\) satisfies \(x \in Y\). This justifies the following property.
Property 7
This property states that the supports of the nodes \(K = (X,v) \in \varTheta \) of \({\mathfrak {G}}\) can be partially computed from the reachable zones of the points of \(\varLambda \). These computed supports may yet be incomplete, since X can lie within the reachable zones of several points of \(\varLambda \) within the same equivalence class \([x]_{\sim _v}\).
However, the following property, that derives from Definition 6, guarantees that no point of \(\varOmega \) will be omitted when computing the nodes of \(\varTheta \) from unions of reachable zones.
Property 8
The set \(\{\rho (x) \mid x \in \varLambda \}\) is a cover of \(\varOmega \).
The important fact of this property is that \(\bigcup _{x \in \varLambda } \rho (x) = \varOmega \). However, the set of reachable zones is not a partition of \(\varOmega \), in general, due to possible overlaps. For instance, let \(x_1, x_2 \in \varLambda \), \(y \in \varOmega \), and let us suppose that \(x_1 \smallfrown y \smallfrown x_2\) and \(I(x_1) > I(y) < I(x_2)\); then we have \(y \in \rho (x_1) \cap \rho (x_2) \ne \emptyset \).
Remark 9
The computation of the reachable zones can be carried out by a seeded region-growing, by considering \(\varLambda \) as the set of seeds. The time cost is output-dependent, since it is linear with respect to the ratio of overlap between the different reachable zones. More precisely, it is \({\mathcal {O}}(\sum _{x \in \varLambda }\mid \rho (x)\mid ) = {\mathcal {O}}((1+\gamma ).\mid \varOmega \mid )\), with \(\gamma \in [0,\mid \varOmega \mid /4] \subset {\mathbb {R}}\) the overlap ratio^{6}, varying between 0 (no overlap between reachable zones, i.e. \(\{\rho (x) \mid x \in \varLambda \}\) is a partition of \(\varOmega \)) and \(\mid \varOmega \mid /4\) (all reachable zones are maximally overlapped).
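The seeded region-growing of Remark 9 can be sketched as a breadth-first traversal that only follows strictly decreasing edges (Definition 6); the dictionary encodings below are assumptions of this sketch.

```python
from collections import deque

# Reachable zone rho(x): points reachable from the leaf x by a path of
# strictly decreasing values (the "descent paradigm" of Sect. 8).

def reachable_zone(x, adj, I, leq):
    def gt(a, b):  # strict order: a > b
        return leq(b, a) and a != b
    zone, queue = {x}, deque([x])
    while queue:
        z = queue.popleft()
        for y in adj[z]:
            if y not in zone and gt(I[z], I[y]):  # descend: I(z) > I(y)
                zone.add(y)
                queue.append(y)
    return zone
```

On a one-dimensional image with values 3, 2, 1, 2, 3, the two leaves yield the zones {0, 1, 2} and {2, 3, 4}, which overlap at the middle point and together cover \(\varOmega \) (Property 8).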
The fact that two reachable zones may be adjacent and / or overlap will allow us to complete the construction of the nodes \(K \in \varTheta \) of \({\mathfrak {G}}\).
9 Reachable Zone Graph
9.1 Reachable Zone Adjacency
When two reachable zones are adjacent —a fortiori when they overlap— they contribute to the definition of common supports for nodes of the componentgraph.
Let \(x_1, x_2 \in \varLambda \) (\(x_1 \ne x_2\)), and let us consider their reachable zones \(\rho (x_1)\) and \(\rho (x_2)\). Let \(y_1 \in \rho (x_1)\) and \(y_2 \in \rho (x_2)\) be two points such that \(y_1 \smallfrown y_2\). Let us consider a value \(v \in V\) such that \(v \leqslant I(y_1)\) and \(v \leqslant I(y_2)\). Since we have \(y_1 \smallfrown y_2\), and from the very definition of reachable zones (Definition 6), there exists a path \(x_1 = z_0 \smallfrown \ldots \smallfrown z_i \smallfrown \ldots \smallfrown z_t = x_2\) (\(t \ge 1\)) within \(\rho (x_1) \cup \rho (x_2)\) such that \(v \leqslant I(z_i)\) for any \(i \in [\![0,t]\!]\). This justifies the following property.
Property 10
Let \(x_1, x_2 \in \varLambda \) with \(x_1 \ne x_2\). Let us suppose that there exists \(y_1 \smallfrown y_2\) with \(y_1 \in \rho (x_1)\) and \(y_2 \in \rho (x_2)\). Then, for any \(v \in V\) such that \(v \leqslant I(y_1)\) and \(v \leqslant I(y_2)\), we have \(\kappa ((x_1,v)) = \kappa ((x_2,v))\), i.e. \((x_1,v) \sim (x_2,v)\), i.e. \(x_1 \sim _v x_2\).
This property provides a way to build the equivalence classes of \(\sim _v\) (\(v \in V\)) and then \(\sim \). In particular, we can derive a notion of adjacency between the points of \(\varLambda \) or, equivalently, between their reachable zones, leading to a notion of reachable zone graph.
Definition 11
9.2 Reachable Zone Graph Valuation
The structural description provided by the reachable zone graph \({\mathfrak {R}}\) of I is necessary, but not sufficient for building the equivalence classes of \(\sim \), and then actually building the nodes of the component-graph \({\mathfrak {G}}\) via the function \(\kappa \). Indeed, as stated in Property 10, we also need information about the values of V associated to the adjacency links. This leads us to define a notion of valuation on \(\smallfrown _\varLambda \) or, more generally^{7}, on \(\varLambda \times \varLambda \).
Definition 12
Remark 13
For any \(x_1, x_2 \in \varLambda \), we have \(\sigma ((x_1,x_1)) = \emptyset \) and \(\sigma ((x_1,x_2)) = \sigma ((x_2,x_1))\).
Actually, the definition of \(\sigma \) only requires to consider the couples of points located at the borders of reachable zones, as stated by the following reasoning. Let \(x_1, x_2 \in \varLambda \) (\(x_1 \ne x_2\)). Let \(v \in \sigma ((x_1,x_2))\). Then, there exist \(y_1 \in \rho (x_1)\) and \(y_2 \in \rho (x_2)\) with \(y_1 \smallfrown y_2\), such that \(I(y_1) \geqslant v \leqslant I(y_2)\). Now, let us assume that we also have \(y_1 \in \rho (x_2)\) and \(y_2 \in \rho (x_1)\), i.e. \(y_1, y_2 \in \rho (x_1) \cap \rho (x_2)\). There exists a path \(x_1 = z_0 \smallfrown \ldots \smallfrown z_i \smallfrown \ldots \smallfrown z_t = y_1\) (\(t \ge 1\)) within \(\rho (x_1)\), with \(I(z_i) > I(z_{i+1})\) for any \(i \in [\![0,t-1]\!]\). Let \(j = \max \{i \in [\![0,t]\!] \mid z_i \notin \rho (x_2)\}\) (we necessarily have \(j < t\)). Let \(v' = I(z_{j+1})\). By construction, we have \((z_j,z_{j+1}) \in \rho (x_1) \times \rho (x_2)\), \(z_j \smallfrown z_{j+1}\) and \(I(z_j) \geqslant v' \geqslant v\), i.e. \(I(z_j) \geqslant v \leqslant I(z_{j+1})\). Then v could be characterized by considering a couple of points \((z_j,z_{j+1})\) with \(z_j \in \rho (x_1) \setminus \rho (x_2)\) and \(z_{j+1} \in \rho (x_2)\), i.e. with \(z_{j+1}\) at the border of \(\rho (x_2)\). This justifies the following property.
Property 14
Let \(x_1, x_2 \in \varLambda \). Let \(v \in V\). We have \(v \in \sigma ((x_1,x_2))\) iff there exists \(y_1 \in \rho (x_1) \setminus \rho (x_2)\) and \(y_2 \in \rho (x_2)\) such that \(y_1 \smallfrown y_2\) and \(I(y_1) \geqslant v \leqslant I(y_2)\).
Based on this property, the construction of the sets of values \(\sigma ((x_1,x_2)) \in 2^V\) for each couple \((x_1,x_2) \in \varLambda \times \varLambda \) can be performed by considering only a reduced subset of edges of \(\smallfrown \), namely the couples \(y_1 \smallfrown y_2\) such that \(y_1 \in \rho (x_1) \setminus \rho (x_2)\) and \(y_2 \in \rho (x_2)\), i.e. the border of the reachable zone of \(x_2\) adjacent to / within the reachable zone of \(x_1\). In particular, these specific edges can be identified during the computation of the reachable zones, without increasing the time complexity of this step.
Now, let us focus on the determination of the values \(v \in V\) to be stored in \(\sigma ((x_1,x_2))\), when identifying a couple \(y_1 \smallfrown y_2\) such that \(y_1 \in \rho (x_1) \setminus \rho (x_2)\) and \(y_2 \in \rho (x_2)\). Practically, two cases can occur. Case 1: \(I(y_1) > I(y_2)\), i.e. \(y_2 \in \rho (x_1) \cap \rho (x_2)\). Then all the values \(v \leqslant I(y_2)\) are such that \(I(y_1) \geqslant v \leqslant I(y_2)\); in other words, we have \(I(y_2)^\downarrow \subseteq \sigma ((x_1,x_2))\). Case 2: \(I(y_2)\) and \(I(y_1)\) are non-comparable, i.e. \(y_2 \in \rho (x_2) \setminus \rho (x_1)\). Then, the values that belong to \(\sigma ((x_1,x_2))\) with respect to this specific edge are those simultaneously below \(I(y_1)\) and \(I(y_2)\); in other words, we have \((I(y_1)^\downarrow \cap I(y_2)^\downarrow ) \subseteq \sigma ((x_1,x_2))\).
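In both cases, the values contributed by one border edge are the common lower bounds of \(I(y_1)\) and \(I(y_2)\); a sketch, where the explicit value set `V` and the predicate `leq` are encodings assumed for illustration only:

```python
# Values contributed to sigma((x1, x2)) by one border edge y1 - y2,
# with v1 = I(y1) and v2 = I(y2) (Property 3 ensures v1 != v2).

def edge_contribution(v1, v2, V, leq):
    def down(v):  # v^down within the finite value set V
        return {w for w in V if leq(w, v)}
    if leq(v2, v1):                 # Case 1: I(y1) > I(y2)
        return down(v2)             # contributes I(y2)^down
    return down(v1) & down(v2)      # Case 2: non-comparable values
```

Case 1 is simply the special form that the intersection \(I(y_1)^\downarrow \cap I(y_2)^\downarrow \) takes when the two values are comparable.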
Remark 15
Instead of exhaustively computing the whole set of values \(\sigma ((x_1,x_2)) \in 2^V\) associated to the edge \(x_1 \smallfrown _\varLambda x_2\), it is sufficient to determine the maximal values of such a subset of V. We can then consider the function \(\sigma _\triangledown : \varLambda \times \varLambda \rightarrow 2^{V}\), associated to \(\sigma \), and defined by \(\sigma _\triangledown ((x_1,x_2)) = \bigtriangledown ^\leqslant \sigma ((x_1,x_2))\). In particular, for any \(x_1 \smallfrown _\varLambda x_2\), we have \(v \in \sigma ((x_1,x_2))\) iff there exists \(v' \in \sigma _\triangledown ((x_1,x_2))\) such that \(v \leqslant v'\).
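Once \(\sigma \) is available, Property 10 yields the classes of \(\sim _v\) by a union-find over \(\varLambda \): for a fixed v, two leaves are merged whenever v belongs to the valuation of their adjacency link. A sketch under assumed dictionary encodings (all names hypothetical):

```python
# Classes of the equivalence relation ~_v over the leaves, for a fixed v.
# `sigma` maps each edge (x1, x2) of the reachable zone graph to its values.

def classes_of_v(lam, sigma, v):
    parent = {x: x for x in lam}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (x1, x2), values in sigma.items():
        if v in values:                  # Property 10: x1 ~_v x2
            parent[find(x1)] = find(x2)

    classes = {}
    for x in lam:
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())
```

Each returned class gathers the leaves whose couples \((x,v)\) encode the same node via \(\kappa \).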
10 Algorithmic Sketch
Based on the above discussion, we now recall the main steps of the algorithmic process for the construction of data-structures allowing us to handle a component-graph.
10.1 Input / Output
The input of the algorithm is an image I, i.e. a function \(I : \varOmega \rightarrow V\), defined on a graph \((\varOmega ,\smallfrown )\) and taking its values in an ordered set \((V,\leqslant )\). Note that I is first preprocessed in order to obtain a flat zone image (see Sect. 5.1); this step presents no algorithmic difficulty and allows for reducing the space cost of the image without altering the structure of the component-graph to be further built.

The output is composed of the following data-structures:
- the ordered set \((V,\leqslant )\) (and / or its Hasse diagram \((V,\prec )\));
- the initial graph \((\varOmega ,\smallfrown )\), equipped with the function \(I : \varOmega \rightarrow V\);
- the set of leaves \(\varLambda \subseteq \varOmega \);
- the function \(\rho : \varLambda \rightarrow 2^\varOmega \) that maps each leaf to its reachable zone (and / or its “inverse” function \(\rho ^{-1} : \varOmega \rightarrow 2^\varLambda \) that indicates, for each point of \(\varOmega \), the reachable zone(s) where it lies);
- the reachable zone graph \({\mathfrak {R}}= (\varLambda , \smallfrown _\varLambda )\);
- the valuation function \(\sigma : \varLambda \times \varLambda \rightarrow 2^V\) (practically, \(\sigma :\ \smallfrown _\varLambda \ \rightarrow 2^V\)) or its “compact” version \(\sigma _\triangledown \).
10.2 Algorithmics and Complexity
The initial graph \((\varOmega ,\smallfrown )\) and the ordered set \((V,\leqslant )\) are provided as input. The space cost of \((\varOmega ,\smallfrown )\) is \({\mathcal {O}}(\mid \varOmega \mid )\) (we assume that \(\mid \mathord {\smallfrown }\mid \ = {\mathcal {O}}(\mid \varOmega \mid )\)). The space cost of \((V,\leqslant )\) is \({\mathcal {O}}(\mid V\mid )\) (by modelling \(\leqslant \) via its Hasse diagram, i.e. its transitive reduction, we can assume that \(\mid \mathord {\leqslant }\mid \ = {\mathcal {O}}(\mid V\mid )\)). The space cost of I is \({\mathcal {O}}(\mid \varOmega \mid )\).
As stated in Sect. 5.2, the set of leaves \(\varLambda \subseteq \varOmega \) is computed by a simple scanning of \(\varOmega \), with respect to the image I. This process has a time cost \({\mathcal {O}}(\mid \varOmega \mid )\). The space cost of \(\varLambda \) is \({\mathcal {O}}(\mid \varOmega \mid )\) (with, in general, \(\mid \varLambda \mid \ \ll \ \mid \varOmega \mid \)).
As stated in Sect. 8, the function \(\rho \) (and equivalently, \(\rho ^{-1}\)) is computed by a seeded region-growing. This process has a time cost which is output-dependent, and in particular equal to the space cost of the output; this cost is \({\mathcal {O}}(\mid \varOmega \mid ^\alpha )\) with \(\alpha \in [1,2]\). One may expect that, for standard images, we have \(\alpha \simeq 1\).
The adjacency \(\smallfrown _\varLambda \) can be computed on the fly, during the construction of the reachable zones. By assuming that, at a given time, the reachable zones of \(\varLambda ^+\) are already constructed while those of \(\varLambda ^-\) are not yet (\(\varLambda ^+ \cup \varLambda ^- = \varLambda \)), when building the reachable zone of \(x \in \varLambda ^-\), if a point \(y \in \varOmega \) is added to \(\rho (x)\), we add the couple \((x,x')\) to \(\smallfrown _\varLambda \) if there exists \(z \in \rho (x')\) in the neighbourhood of y, i.e. \(z \smallfrown y\). This process induces no extra time cost to that of the above seeded region-growing. The space cost of \(\smallfrown _\varLambda \) is \({\mathcal {O}}(\mid \varLambda \mid ^\beta )\) with \(\beta \in [1,2]\). One may expect that, for standard images, we have \(\beta \simeq 1\).
The valuation function \(\sigma : \varLambda \times \varLambda \rightarrow 2^V\) is built by scanning the adjacency links of \(\smallfrown \) located on the borders of the reachable zones^{8}. This subset of adjacency links has a space cost \({\mathcal {O}}(\mid \varOmega \mid ^\delta )\), with \(\delta \in (0,1]\). One may expect that \(\delta \ll 1\), due to the fact that we consider borders of regions^{9}. For each adjacency link of this subset of \(\smallfrown \), one or several values (their number can be expected to be, in average, a low constant value k) are stored in \(\sigma \). This whole storage then has a time cost \({\mathcal {O}}(k.\mid \varOmega \mid ^\delta )\). Then, some of these values are removed, since they do not belong to \(\sigma _\triangledown \). If we assume that we are able to compare the values of \((V,\leqslant )\) in constant time^{10} \({\mathcal {O}}(1)\), this removal process, that finally leads to \(\sigma _\triangledown \), has in average, a time cost of \({\mathcal {O}}((k.\mid \varOmega \mid ^\delta )^2 / \mid \varLambda \mid ^\beta )\). The space cost of \(\sigma _\triangledown \) is \({\mathcal {O}}(k.\mid \varOmega \mid ^\delta )\).
11 Data-Structure Manipulation
Once all these data-structures are built, it is possible to use them in order to manipulate an implicit model of component-graph. In particular, let us come back to the three questions stated in Sect. 4, that should be easily answered for an efficient use of component-graphs.
Which are the nodes of \({\mathfrak {G}}\) (i.e. how to identify them)? A node \(K = (X,v) \in \varTheta \) can be identified by a leaf \(K_x \in \bigtriangleup ^{\trianglelefteq } \varTheta \) (i.e. a flat zone \(x \in \varLambda \)) and its value \(v \in V\), by considering the \(\kappa \) encoding, that is “\(\kappa ((x,v))\), the node of value v whose support contains the flat zone x”. More generally, since the \(\rho ^{-1}\) function provides the set of reachable zones where each point of \(\varOmega \) lies, it is also possible to identify the node \(K = (X,v) \in \varTheta \) by any point of X and its value v, that is “the node of value v whose support contains x”.
What is a node of \({\mathfrak {G}}\) (i.e. what are its support and its value)? A node \(K = (X,v) \in \varTheta \) is identified by at least one of the flat zones \(x \in \varLambda \) within its support X, and its value v. The access to v is then immediate. By contrast, the set X is constructed on the fly, by first computing the set of all the leaves forming the connected component \(\varLambda _v^x \ni x\) of the thresholded graph \((\varLambda ,\lambda _v(\smallfrown _\varLambda ))\), where \(\lambda _v(\smallfrown _\varLambda ) = \{x_1 \smallfrown _\varLambda x_2 \mid v \in \sigma ((x_1,x_2))\}\). This process indeed corresponds to a seeded region-growing on a multivalued map. Once the subset of leaves \(\varLambda _v^x \subseteq \varLambda \) is obtained, the support X is computed as the union of the threshold sets of the corresponding reachable zones, i.e. \(X = \bigcup _{y \in \varLambda _v^x} \lambda _v(I_{\rho (y)})\).
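Under assumed dictionary encodings of \(\rho \), \(\sigma \) and the reachable zone graph (all names are hypothetical), this on-the-fly support reconstruction can be sketched as:

```python
from collections import deque

# Support of the node kappa((x, v)): grow the class Lambda_v^x in the
# reachable zone graph through edges whose valuation contains v, then
# take the union of the v-thresholded reachable zones.

def node_support(x, v, zadj, sigma, rho, I, leq):
    comp, queue = {x}, deque([x])
    while queue:
        z = queue.popleft()
        for y in zadj[z]:
            key = (z, y) if (z, y) in sigma else (y, z)  # sigma is symmetric
            if y not in comp and v in sigma.get(key, set()):
                comp.add(y)
                queue.append(y)
    # union of the threshold sets lambda_v(I_rho(y)) over the class
    return {p for y in comp for p in rho[y] if leq(v, I[p])}
```

The traversal only manipulates leaves symbolically; the actual points of \(\varOmega \) are touched in the final thresholding step alone.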
Is a node of \({\mathfrak {G}}\) lower, greater, or non-comparable to another, with respect to \(\trianglelefteq \)? Let us consider two nodes defined as \(K_1 = \kappa (x_1,v_1)\) and \(K_2 = \kappa (x_2,v_2)\). Let us first suppose that \(v_1 \leqslant v_2\). Then, two cases can occur: (1) \(\kappa (x_1,v_1) = \kappa (x_2,v_1)\) or (2) \(\kappa (x_1,v_1) \ne \kappa (x_2,v_1)\). In case 1, we have \(K_2 \trianglelefteq K_1\); in case 2, they are non-comparable. To decide between these two cases, it is indeed sufficient to compute —as above— the set of all the leaves forming the connected component \(\varLambda _{v_1}^{x_1}\) of the thresholded graph \((\varLambda ,\lambda _{v_1}(\smallfrown _\varLambda ))\), and to check if \(x_2 \in \varLambda _{v_1}^{x_1}\) to conclude. Let us now suppose that \(v_1\) and \(v_2\) are non-comparable. A necessary condition for having \(K_2 \trianglelefteq K_1\) is that \(\varLambda _{v_2}^{x_2} \subseteq \varLambda _{v_1}^{x_1}\). We first compute these two sets; if the inclusion is not satisfied, then \(K_1\) and \(K_2\) are non-comparable, or \(K_1 \trianglelefteq K_2\). If the inclusion is satisfied, we have to check, for each leaf \(x \in \varLambda _{v_2}^{x_2}\), whether \(\lambda _{v_2}(I) \cap \rho (x_2) \subseteq \lambda _{v_1}(I) \cap \rho (x_1)\); this iterative process can be interrupted as soon as a negative answer is obtained. If all these inclusions are finally satisfied, then we have \(K_2 \trianglelefteq K_1\).
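The first case of this test (comparable values \(v_1 \leqslant v_2\)) reduces to a membership query in \(\varLambda _{v_1}^{x_1}\); a partial sketch under hypothetical dictionary encodings of the reachable zone graph and \(\sigma \) (the non-comparable-values case follows the inclusion tests described above and is omitted here):

```python
from collections import deque

def lam_class(x, v, zadj, sigma):
    """Leaves reachable from x through edges whose valuation contains v."""
    comp, queue = {x}, deque([x])
    while queue:
        z = queue.popleft()
        for y in zadj[z]:
            key = (z, y) if (z, y) in sigma else (y, z)
            if y not in comp and v in sigma.get(key, set()):
                comp.add(y)
                queue.append(y)
    return comp

def leq_nodes_when_values_comparable(x1, v1, x2, v2, zadj, sigma, leq):
    """Decide K2 <| K1 for K1 = kappa((x1, v1)), K2 = kappa((x2, v2)),
    in the case v1 <= v2: true iff x2 lies in the class Lambda_{v1}^{x1}."""
    assert leq(v1, v2)
    return x2 in lam_class(x1, v1, zadj, sigma)
```

Note that the test never materialises the node supports: it only walks the reachable zone graph.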
Remark 16
By contrast with a standard component-graph —that explicitly models all the nodes \(\varTheta \) and edges \(\blacktriangleleft \) of the Hasse diagram— we provide here an implicit representation of \({\mathfrak {G}}\) that requires to compute on the fly certain information. This computation is however reduced as much as possible, by subdividing the support of the nodes into reachable zones, and manipulating them in a symbolic way, i.e. via their associated leaves and the induced reachable zone graph, whenever a —more costly— handling of the “real” regions of \(\varOmega \) is not required. Indeed, the actual use of these regions of \(\varOmega \) is considered only when spatial information is mandatory, or when the comparison between nodes no longer depends on information related to the value part of \(\trianglelefteq \), i.e. \(\leqslant \), but to its spatial part, i.e. \(\subseteq \) (see Eq. (4)).
12 Concluding Remarks
This paper has presented a preliminary discussion about a way to handle component-graphs without explicitly building them. In previous works, we had observed that the computation of a whole component-graph led not only to a high time cost, but also to a data-structure whose space cost forbade an efficient use for image processing. Based on these considerations, our purpose was here to build, with a reasonable time cost, some data-structures of reasonable space cost, allowing us to manipulate an implicit model of component-graph, in particular by computing, on the fly, the required information.
This is a work in progress, and we do not yet pretend that the proposed solutions are relevant. We think that the above properties are correct, and that the proposed algorithmic processes lead to the expected results (implementation in progress). At this stage, our main uncertainty is related to the real space and time cost of these data-structures, their construction and their handling. It is plain that these costs are input/output-dependent, and in particular correlated with the nature of the order \(\leqslant \), and the size of the value set V. Further experiments will aim at clarifying these points.
Our perspective works are related to (i) the opportunities offered by our paradigm to take advantage of distributed algorithmics (in particular via the notion of reachable zones, that subdivide \(\varOmega \)); (ii) the development of a cache data-structure (e.g., based on the Hasse diagram \((V,\prec )\)), in order to reuse some information computed on the fly, taking advantage in particular of Property 5; and (iii) the investigation of the links between this approach and the concepts developed on directed graphs [13], of interest with respect to the notion of descending path used to build reachable zones.
Footnotes
 1.
These two variants, called \(\dot{\varTheta }\)- and \({\ddot{\varTheta }}\)-component-graphs, rely on sets of nodes defined as subsets of \(\varTheta \). A \({\ddot{\varTheta }}\)-component-graph only contains the nodes that are necessary to model \(I\) in a lossless way, while the \(\dot{\varTheta }\)-component-graph contains the nodes with maximal values for a given support.
 2.
The image where each flat zone (i.e. maximal connected region of constant value) is replaced by a single point, and where the adjacency relation between these flat zones is inherited from \(\smallfrown \) (i.e. two flat zones are adjacent iff one point of the first is adjacent to one point of the second).
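This construction can be sketched as follows (an illustrative Python fragment, with naming of our own): flat zones are extracted as connected components of constant value, and their adjacency is inherited from the point-wise relation:

```python
# Sketch of the flat zone image construction described in this footnote:
# each flat zone (maximal connected region of constant value) becomes one
# label, and two zones are adjacent iff some point of one is adjacent to
# some point of the other.

def flat_zones(image, adjacency):
    label, zones = {}, []
    for p in image:
        if p in label:
            continue
        zid, stack, zone = len(zones), [p], {p}
        label[p] = zid
        while stack:
            q = stack.pop()
            for r in adjacency[q]:
                if r not in label and image[r] == image[q]:
                    label[r] = zid
                    zone.add(r)
                    stack.append(r)
        zones.append(zone)
    # adjacency between zones, inherited from the point-wise relation
    zone_adj = {(label[p], label[q])
                for p in image for q in adjacency[p] if label[p] != label[q]}
    return zones, zone_adj
```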
 3.
In digital imaging, we have \(|{\smallfrown}| = {\mathcal {O}}(|\varOmega |)\) (e.g. with the 4-, 8-, 6- and 26-adjacencies on \({\mathbb {Z}}^2\) and \({\mathbb {Z}}^3\), we have \(|{\smallfrown}| \simeq k \cdot |\varOmega |\), with \(k = 2\), 4, 3 and 13, respectively), and this generally still holds for the induced flat zone images. Under such hypotheses, the detection of leaves can be performed in linear time \({\mathcal {O}}(|\varOmega |)\) with respect to the size of the image. For the sake of concision, we will assume, from now on, that we indeed have \(|{\smallfrown}| = {\mathcal {O}}(|\varOmega |)\).
 4.
By convention, we define a supplementary node \(K_\top \) for handling the cases where the considered value \(v\) is greater than (or non-comparable with) the value \(I(x)\) associated to \(x\), so that \(\kappa \) is defined on the whole Cartesian product \(\varLambda \times V\). This does not induce any algorithmic or structural issue.
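One possible reading of this convention, as an illustrative sketch (K_TOP and node_of are hypothetical names, not from the paper), makes kappa total by returning the supplementary node whenever v ⩽ I(x) fails:

```python
# Sketch of the sentinel convention: kappa is made total on Lambda x V by
# returning a supplementary node K_TOP whenever the queried value v is
# greater than, or non-comparable with, I(x).

K_TOP = object()  # the supplementary node

def make_kappa(image, leq, node_of):
    """node_of(x, v) stands for the 'real' node lookup (assumed given)."""
    def kappa(x, v):
        if not leq(v, image[x]):
            return K_TOP  # v greater than or non-comparable with I(x)
        return node_of(x, v)
    return kappa
```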
 5.
Actually, it is surjective iff \(\kappa ^{-1}(\{K_\top \}) \ne \emptyset \). If \(\kappa ^{-1}(\{K_\top \}) = \emptyset \), we can simply define \(\kappa : \varLambda \times V \rightarrow \varTheta \), and the surjectivity property then still holds.
 6.
The exact upper bound of \(\gamma \) is actually \((|\varOmega | - 1)^2 / (4 |\varOmega |)\), and it is reached when \(|\varLambda | = (|\varOmega | + 1)/2\).
 7.
The adjacency relation \(\smallfrown _\varLambda \) is indeed a subset of the Cartesian product \(\varLambda \times \varLambda \); it would thus be sufficient to define the function \(\sigma :\ \smallfrown _\varLambda \ \rightarrow 2^V\). However, such a notation is unusual and probably confusing for many readers; we thus prefer to define, without loss of correctness, \(\sigma : \varLambda \times \varLambda \rightarrow 2^V\), with (unused) empty values for the couples \((x_1,x_2) \in \varLambda \times \varLambda \) such that \(x_1 \not \smallfrown _\varLambda x_2\).
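This remark translates directly into a sparse representation, sketched below with names of our own choosing: only the couples carrying a non-empty value are stored, while the exposed function remains total on Λ × Λ:

```python
# Sparse sketch of sigma: values are stored only for the (adjacent) couples
# that carry a non-empty set, and every other couple transparently maps to
# the empty set, so the function is total on Lambda x Lambda.

class Sigma:
    def __init__(self):
        self._values = {}  # (x1, x2) -> set of values; non-empty entries only

    def add(self, x1, x2, v):
        self._values.setdefault((x1, x2), set()).add(v)

    def __call__(self, x1, x2):
        # empty value for non-adjacent (or never-filled) couples
        return self._values.get((x1, x2), set())
```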
 8.
Actually, only half of these borders are required, due to the symmetry of the configurations (see Property 14).
 9.
For instance, for a digital image in \({\mathbb {Z}}^3\) (resp. \({\mathbb {Z}}^2\)) of size \(|\varOmega | = N^3\) (resp. \(N^2\)), we may expect a space cost of \({\mathcal {O}}(N^2)\) (resp. \({\mathcal {O}}(N)\)), i.e. \(\delta = 2/3\) (resp. \(\delta = 1/2\)).
 10.
A sufficient condition is to model \((V,\leqslant )\) as an intermediate data structure of space cost \({\mathcal {O}}(|V|^2)\).
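Such an intermediate structure can be sketched as follows (illustrative code of our own): the order is first tabulated as a set of comparable pairs, within the O(|V|^2) space budget stated above, from which the Hasse diagram is obtained by transitive reduction:

```python
# Sketch: tabulate the strict order as an O(|V|^2)-space set of pairs, then
# keep an edge u -> v of the Hasse diagram iff u < v with no w strictly in
# between (transitive reduction of the order relation).

def hasse_edges(values, leq):
    lt = {(u, v) for u in values for v in values if u != v and leq(u, v)}
    return {(u, v) for (u, v) in lt
            if not any((u, w) in lt and (w, v) in lt for w in values)}
```

The time cost of this naive reduction is cubic in |V|, but the space cost stays quadratic, which is the only constraint stated in the footnote.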
References
 1. Salembier, P., Wilkinson, M.H.F.: Connected operators: a review of region-based morphological image processing techniques. IEEE Signal Process. Mag. 26, 136–157 (2009)
 2. Salembier, P., Oliveras, A., Garrido, L.: Anti-extensive connected operators for image and sequence processing. IEEE Trans. Image Process. 7, 555–570 (1998)
 3. Monasse, P., Guichard, F.: Scale-space from a level lines tree. J. Vis. Commun. Image Represent. 11, 224–236 (2000)
 4. Salembier, P., Garrido, L.: Binary partition tree as an efficient representation for image processing, segmentation and information retrieval. IEEE Trans. Image Process. 9, 561–576 (2000)
 5. Passat, N., Naegel, B.: An extension of component-trees to partial orders. In: ICIP, pp. 3981–3984 (2009)
 6. Passat, N., Naegel, B.: Component-trees and multivalued images: structural properties. J. Math. Imaging Vis. 49, 37–50 (2014)
 7. Kurtz, C., Naegel, B., Passat, N.: Connected filtering based on multivalued component-trees. IEEE Trans. Image Process. 23, 5152–5164 (2014)
 8. Naegel, B., Passat, N.: Towards connected filtering based on component-graphs. In: Hendriks, C.L.L., Borgefors, G., Strand, R. (eds.) ISMM 2013. LNCS, vol. 7883, pp. 353–364. Springer, Heidelberg (2013). doi:10.1007/978-3-642-38294-9_30
 9. Naegel, B., Passat, N.: Colour image filtering with component-graphs. In: ICPR, pp. 1621–1626 (2014)
 10. Carlinet, E., Géraud, T.: MToS: a tree of shapes for multivariate images. IEEE Trans. Image Process. 24, 5330–5342 (2015)
 11. Xu, Y., Géraud, T., Najman, L.: Connected filtering on tree-based shape-spaces. IEEE Trans. Pattern Anal. Mach. Intell. 38, 1126–1140 (2016)
 12. Grossiord, É., Naegel, B., Talbot, H., Passat, N., Najman, L.: Shape-based analysis on component-graphs for multivalued image processing. In: Benediktsson, J.A., Chanussot, J., Najman, L., Talbot, H. (eds.) ISMM 2015. LNCS, vol. 9082, pp. 446–457. Springer, Cham (2015). doi:10.1007/978-3-319-18720-4_38
 13. Perret, B., Cousty, J., Tankyevych, O., Talbot, H., Passat, N.: Directed connected operators: asymmetric hierarchies for image filtering and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 37, 1162–1176 (2015)