1 Introduction

Manga are Japanese black-and-white comics (Fig. 1) that are popular worldwide. Nowadays, manga are distributed not only in print but also electronically via online stores. Some e-manga archives are very large; e.g., the Amazon Kindle store sells more than 130,000 e-manga titles. Users of online stores must therefore retrieve titles from large collections when they purchase new manga.

Fig. 1 Manga examples. a A manga book (title) consists of 200 pages on average. b A page (an image) is composed of several frames (rectangular areas). c Characters, text balloons, and background are drawn in each frame. Typically, manga are drawn and printed in black and white

However, current e-manga archives offer very limited search support, i.e., keyword-based search by title or author. Keyword search does not scale to large collections and cannot take the images (contents) of manga into consideration. Applying content-based multimedia retrieval techniques to manga search therefore has the potential to make the manga-search experience more intuitive, efficient, and enjoyable.

In this paper, we propose a content-based manga retrieval framework. The framework is composed of three steps: labeling of margin areas, an objectness-based edge orientation histogram (EOH) feature description [35] with screentone removal [23], and approximate nearest-neighbor (ANN) search using product quantization (PQ) [26]. For querying, the framework provides a sketch-based interface. Based on the interface, two interactive reranking schemes are proposed for an intuitive search: relevance feedback and query retouch. The main contribution of this paper is to build an efficient large-scale framework for manga search.

An overview of the GUI implementation of the proposed system is shown in Fig. 2. Given a query sketch (Fig. 2a), the system retrieves similar areas from a manga dataset in real time, and shows the top results in thumbnail windows (Fig. 2c). The retrieval is performed automatically after each stroke is completed. If a user clicks on one of the retrieved results in the thumbnail windows, a page containing the result is shown in a preview window (Fig. 2d). In this case, the page containing the first-ranked result is shown. Please see the supplemental video for more details.

Fig. 2 An interface for the retrieval system. a A canvas for the user's sketch. b Visualized EOH features, where the left figure shows a query and the right shows the retrieved result. c Thumbnail windows for the retrieved results. d A preview window of a manga page

There are two technical challenges associated with content-based manga retrieval. First, image descriptors designed for naturalistic images may not be suited to manga because the visual characteristics of manga differ from those of naturalistic images. Second, it is necessary to retrieve not only a whole image but also parts of it because a manga page comprises several frames (rectangular areas). Both challenges are addressed by the proposed framework.

For evaluation, we built Manga109, a novel dataset of manga images drawn by professional manga artists, which consists of 109 comic books with a total of 21,142 pages. Manga109 will be very beneficial for the multimedia research community because it has been difficult for researchers to use “real” manga in experiments due to copyright issues, and research so far has therefore been limited to small-scale data. We made this dataset with permission from 94 professional creators. The dataset covers a wide range of manga genres and is publicly available for academic use.

Our contributions are summarized as follows:

  • We built an efficient large-scale framework for manga search. The framework retrieves a manga image from 21,142 pages in 70 ms using 204 MB of memory on a single PC.

  • We provided a publicly available manga dataset, Manga109, which includes 109 manga titles containing 21,142 pages that will be useful for manga image-processing research.

  • We verified that EOH with screentone removal is effective for manga image retrieval compared with the features used in state-of-the-art sketch-based retrieval systems [54, 62].

  • As far as we know, this is the first work to make use of sketches to retrieve manga or comics.

A preliminary version of this paper appeared in our recent conference paper [39]. This paper contains significant differences: (1) we constructed and released the publicly available manga dataset; (2) we significantly improved the proposed algorithm by leveraging an objectness measure, PQ, and screentone removal; and (3) we performed massive experimental evaluations using the new dataset, including comparisons with state-of-the-art methods.

The rest of the paper is organized as follows: Section 2 introduces related work. Section 3 presents our retrieval system and Section 4 shows the proposed query interaction schemes. A novel dataset is introduced in Section 5 and evaluations are given in Section 6.

2 Related work

We discuss related studies in content-based manga retrieval by categorizing them into manga description, retrieval and localization, ANN search, and querying. Additionally, we introduce research related to manga.

2.1 Manga image description

In the literature of content-based image retrieval, many methods have been proposed to describe images. A query and dataset images are converted to some form of vector representation so that their similarity can be evaluated. A typical form is the bag-of-features (BoF) representation [57], in which local features such as SIFT [37] are extracted from an image and quantized into a sparse histogram. The distance between a query histogram and a dataset histogram can be computed efficiently using an inverted index structure. The BoF form has been studied widely because it achieves successful image description with fast computation.
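
As a rough illustration of this quantization step (not a reproduction of any cited implementation; the codebook is assumed to have been trained offline, e.g., by k-means over SIFT descriptors), a BoF histogram can be sketched as follows:

```python
# Minimal bag-of-features sketch (illustrative; codebook assumed pretrained offline).
import numpy as np

def bof_histogram(descriptors, codebook):
    """descriptors: (n, d) local features; codebook: (k, d) visual words."""
    # Squared Euclidean distance from every descriptor to every visual word.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                                 # hard assignment
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
    return hist / (np.linalg.norm(hist) + 1e-12)              # L2 normalization
```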

Various extensions of BoF have been proposed so far, including query expansion [12], soft voting [45], and Hamming embedding [24]. The state-of-the-art in this line are the vector of locally aggregated descriptors (VLAD) and Fisher vector (FV) approaches [27, 59], which are interpreted as generalized versions of BoF [52].

However, all of the above methods were designed for settings in which both the query and the dataset are naturalistic images. For example, texture-based features such as the Local Binary Pattern (LBP) [42] are not effective for manga because manga images do not contain texture information (Fig. 3). Thus, special attention has been paid to cases in which a query is a line drawing, i.e., a sketch.

Fig. 3 Poor performance of LBP. For each image, the screentone is removed [23], and the LBP feature is extracted. The differences between LBP features are then computed. Clearly, the visual characteristics of (a) and (b) are similar, whereas (a) and (c) are dissimilar. However, the distance between the LBP features for (a) and (b) is 1464.6, whereas that between (a) and (c) is 396.7. This illustrates that the LBP feature cannot express the visual similarities of line drawings. Note that the results become even worse if we do not remove the screentone

For the case in which a query is presented as a sketch, several special features tailored to sketch queries have been proposed. BoF using a histogram of oriented gradients [13] was proposed in [16, 54, 75]. Because sketch images usually have a flat background of uniform white pixels, several methods try to fill such blank areas with meaningful values and interpret them as a feature vector, e.g., distance-transformed values from edge pixels [65] and Poisson image-editing-based interpolation values [21, 22]. Chamfer matching, a successful method for computing the similarity between two object contours, has been incorporated into sketch-based retrieval frameworks [8, 48, 62].

In the manga feature description task, we compared the proposed method with (1) BoF, (2) large-window FV as the state-of-the-art of BoF-based methods [54], and (3) compact oriented chamfer matching (Compact OCM) as the state-of-the-art of chamfer-based methods [62].

Note that deep-learning techniques are becoming quite popular in image recognition because of their superior performance, and they are now being applied to sketch processing. However, the effectiveness of such methods has not yet been discussed sufficiently; for example, Yu and colleagues [72] reported that traditional methods such as FV are still competitive for a sketch recognition task. Deep learning-based features are therefore beyond the scope of this paper.

2.2 Retrieval and localization

As shown in Fig. 2d, our objective is not only to find a similar page, but also to identify a similar region in the retrieved image. This kind of task has been tackled by spatial-verification-based reranking methods [7, 25, 33, 44, 71, 74], which first find candidate images by BoF and then rerank the top results using spatial information such as the relative positions of extracted SIFT features. However, such methods cannot be applied to manga for two reasons. First, the structure of a manga image is more complex than that of a naturalistic image. A single manga page usually consists of several frames, each of which is a totally different visual scene (see Fig. 1b); a manga page can therefore be interpreted as a set of images if we treat each frame as a single image. This is a much harder condition than in usual image processing tasks such as retrieval or recognition, which assume that the input is a single image rather than a set of images. Second, BoF does not work well for manga, as shown in the experimental section.

Another related area is object detection, in which the position of a target object is identified in an input image. Among the many methods proposed so far, contour-based object localization methods [56, 70] are closely related to our manga problem because an object is represented by its contour. In such methods, an instance of a target object such as a “giraffe” is localized in an image by matching a query contour of the giraffe, which is learned from training data. If we applied such methods to all dataset pages and selected the best result, we might achieve both retrieval and localization at once. However, this is not realistic because (1) these methods usually require learning to achieve reasonable performance, i.e., many giraffe images are required to localize an instance of a giraffe, whereas a query in our case is usually a single sketch; and (2) localization usually takes time and cannot be scaled to a large number of images.

The proposed method makes use of a window-based matching approach, which is simple but usually slow. We formulate search and localization as a single nearest-neighbor problem and solve it efficiently using ANN methods.

2.3 Approximate nearest neighbor search

To determine similar vectors efficiently, several approximate nearest-neighbor search methods have been proposed. The two main branches of ANN methods are product-quantization-based methods [18, 26] and hashing-based methods [36, 68, 69, 73]. Hashing-based methods mainly focus on supervised search, i.e., label information such as image categories is considered. Product-quantization-based methods approximate the distance measure more directly, which is more closely related to our setting. Therefore, we employ product quantization in our pipeline for an efficient search.

2.4 Querying

How to query multimedia data has been an important problem in content-based multimedia retrieval. Although many approaches have been proposed, including keywords, images, and spatial/image/group predicates [58], we focus on sketches drawn by users. Sketch-based interaction is the most natural way for humans to describe visual objects, and it is widely adopted not only for image retrieval but also in many applications such as shape modeling [43], image montage [9], texture design [29], and as a querying tool for recognizing objects [15, 54].

2.5 Manga

From an image-processing perspective, manga has a distinctive visual nature compared with naturalistic images. Several applications for manga images have been proposed: colorization [47, 53, 63], vectorization [31], layout recognition [64], layout generation [5, 20], element composition [6], manga-like rendering [46], speech balloon detection [49], segmentation [2], face-tailored features [11], screentone separation [23], and retargeting [40]. The research group at Université de La Rochelle constructed a comic database [19], which we mention in Section 5, and analyzed the structure of visual elements in a page to understand comic content semantically [50].

Several studies have explored manga retrieval [14, 34, 60, 61]. In these methods, inputs are usually not pages but cropped frames or characters, and evaluations were conducted using a small number of examples. In addition, the runtimes of these methods have not been discussed. Therefore, it is not known whether these methods can scale to a large dataset. On the other hand, our proposed method can perform a retrieval from 21,142 pages in 70 ms on a single computer.

3 Manga retrieval system

In this section, we present the proposed manga retrieval system. Figure 4 shows the framework of the system. First, input pages are preprocessed offline (Fig. 4d, as discussed in Section 3.4). Next, regions of interest (ROIs) are detected and EOH features are extracted with screentone removal (Fig. 4a, as described in Section 3.1). These features are then compressed into PQ codes (Fig. 4b, as discussed in Section 3.2). At the query phase, given a query sketch, similar EOH features are retrieved (Fig. 4c, as discussed in Section 3.3).

Fig. 4 Framework of the proposed system

The proposed framework is based on the window-search paradigm; i.e., several features are extracted from dataset images using a generic object detector, and a query feature is compared with all of them. Compared with the traditional image retrieval problem, such a window-based system must handle far more features and is usually too slow. We formulate the problem as a single ANN search and solve it using PQ, together with effective preprocessing including the labeling of margin areas.

3.1 EOH feature description of object windows with screentone removal

We first describe the core of the system, the feature description (Fig. 4a). Given a page, candidate object areas are detected automatically (Fig. 4(a1)), and EOH features [35] are then described from the selected areas (Fig. 4(a2)). The retrieval is performed by matching an EOH feature of the query against the EOH features in the dataset. Note that we delete screentones in advance by applying our previously proposed method [23]; this process significantly improves both the ROI detection and the feature description. Figure 5 shows an example of the feature representation. The EOH feature of the red square area in Fig. 5a is visualized in Fig. 5b. The result of the screentone removal is shown in Fig. 5c, and the corresponding feature is visualized in Fig. 5d. Comparing these two figures, we can observe that screentone removal effectively extracts the line structures of the target and suppresses noisy features caused by screentones. EOH is not new, but we found that it is effective for sketch-based manga retrieval when screentone removal is applied in advance. Hereafter, EOH in this paper means EOH with screentone removal.

Fig. 5 An example of an EOH feature. a An image and a selected area. b The visualized EOH feature extracted from the area in (a). As can be seen, the visual characteristics of a manga page can be captured using such a simple histogram. c The image with screentone removal [23]. d The visualized EOH feature extracted from the area in (c). By removing the screentone, the quality of the feature is improved; e.g., the bottom-left area of the feature in (b) includes random directions because of the background, whereas the feature in (d) does not

The candidate object areas are detected by generic object detectors [1, 10, 66] (Fig. 4(a1)), which have proven to be a successful replacement for the sliding-window paradigm. Intuitively, these detectors compute bounding boxes that are likely to contain objects of any kind. These bounding boxes are considered candidates for object instances, and a processing task such as image recognition is applied only to the bounding-box areas, which greatly reduces computational costs compared with a simple sliding-window approach.

We found that objectness measures can detect candidate objects in manga images more effectively than the sliding-window approach. We conducted an experiment using a small dataset in which ground-truth objects (a head of a man) were annotated. The detection rates of several methods were evaluated, as shown in Fig. 6: BING [10], selective search [66], and a sliding window (baseline) [39] were compared. From the results, we concluded that selective search is the best approach for this dataset, and we use it as our object detector in the rest of this paper. Note that a horizontally or vertically long bounding box is divided into several overlapping square boxes because we restrict the shape of object areas to squares; a sketch of the whole proposal step is shown below.
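
This is only an illustrative sketch built on dlib's selective search (the implementation used in Section 6.1); the blur kernel, the minimum patch size, and the 50% overlap for splitting long boxes are our assumptions.

```python
# Illustrative object-proposal step: dlib's selective search plus splitting of
# long boxes into overlapping squares (parameter values are assumptions).
import cv2
import dlib

def square_proposals(page_bgr, min_side=100):
    img = cv2.GaussianBlur(page_bgr, (3, 3), 0)            # slight blur (Section 6.1)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)             # dlib expects RGB
    rects = []
    dlib.find_candidate_object_locations(img, rects, min_size=min_side * min_side)
    squares = []
    for r in rects:
        s = min(r.width(), r.height())                     # side of the square
        if s < min_side:
            continue                                       # discard small patches
        step = max(s // 2, 1)                              # 50% overlap (assumption)
        for top in range(r.top(), r.top() + r.height() - s + 1, step):
            for left in range(r.left(), r.left() + r.width() - s + 1, step):
                squares.append((left, top, s))             # (x, y, side)
    return squares
```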

Fig. 6 Detection rate (DR) given #WIN proposals, a standard metric for evaluating objectness [1, 10, 66]. DR is the percentage of ground-truth objects covered by the proposals generated by each method. An object is considered covered only when the PASCAL overlap criterion is satisfied (intersection over union greater than 0.5 [17]). The area under the curve (AUC), a single-value summary of the performance of each curve, is shown in the legend. We evaluated the methods using the dataset from the localization task; see Section 6.2 for details

After object areas have been detected (typically, 600 to 1,000 areas are selected from a page), their EOH features are described (Fig. 4(a2)). The selected square area is divided into c×c cells. The edges in each cell are quantized into four orientation bins and normalized, and the whole vector is then renormalized; the dimension of the EOH vector is therefore \(4c^{2}\). Note that we discard features in which all elements are zero. For manga, the features should be robust against scale changes because we want to find areas of any size that are similar to the input sketch; for example, if a certain character's face is the target, it can be either small or large. To achieve matching across different scales, we simply accept patches of any size but restrict the shape of an area to a square. Whatever the patch size, EOH features of the same dimension are extracted and used for matching. A minimal sketch of the descriptor is given below.
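
This is an illustrative reimplementation only: the Sobel-based gradient estimate and the function name are our assumptions, and the actual system accumulates the histograms via integral images, as noted below.

```python
# Illustrative EOH descriptor for a square patch: c x c cells, 4 orientation bins.
import cv2
import numpy as np

def eoh(patch_gray, c=8):
    """patch_gray: square grayscale patch with screentone already removed."""
    gx = cv2.Sobel(patch_gray, cv2.CV_32F, 1, 0)            # horizontal gradient
    gy = cv2.Sobel(patch_gray, cv2.CV_32F, 0, 1)            # vertical gradient
    mag = np.hypot(gx, gy)                                  # edge magnitude
    ori = np.mod(np.arctan2(gy, gx), np.pi)                 # orientation in [0, pi)
    b = np.minimum((ori / (np.pi / 4)).astype(int), 3)      # 4 orientation bins
    h, w = patch_gray.shape
    ys = np.arange(h) * c // h                              # cell row of each pixel
    xs = np.arange(w) * c // w                              # cell column of each pixel
    hist = np.zeros((c, c, 4), np.float32)
    np.add.at(hist, (ys[:, None], xs[None, :], b), mag)     # accumulate per cell/bin
    hist /= np.maximum(np.linalg.norm(hist, axis=2, keepdims=True), 1e-12)  # per cell
    v = hist.ravel()                                        # 4 * c * c dimensions
    return v / max(np.linalg.norm(v), 1e-12)                # renormalize whole vector
```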

By describing the EOH features of the candidate areas, a page is represented by a set of EOH features:

$$ \mathcal{X} = {\left\{\boldsymbol{x}_{n}\right\}}_{n=1}^{N}, $$
(1)

where \(\mathcal{X}\) denotes the page, \(\boldsymbol{x}_{n}\) denotes the EOH feature of the nth window, and N is the number of features extracted from the page. Note that integral images are utilized to compute the features quickly, in the same manner as in [35]. In the comparative study, we show that this simple representation achieves better accuracy than previous sketch-based retrieval methods, and we confirm that the EOH-based description gives a good solution to the manga image representation problem.

In summary, given a page, screentone removal is applied, candidate areas are detected by selective search, and EOH features are extracted; the page is then represented as a set of EOH features. This window-based representation is simple and efficient at finding a matching area in the page because each feature implicitly carries its position in the page, which is particularly useful for manga pages.

3.2 Feature compression by product quantization

Given a set of EOH features, we compress them into binary codes using PQ [26], a successful ANN method, as shown in Fig. 4b. Applying PQ greatly reduces both the computational cost of matching and the memory usage. After the features are compressed to binary codes, the approximate distances between a query vector and the dataset codes can be computed efficiently by lookup operations (asymmetric distance computation, ADC [26]). Therefore, the search can be performed efficiently even for a large number of dataset vectors.

Each feature \(\boldsymbol{x}_{n}\) is compressed into a binary code \(\boldsymbol{q}(\boldsymbol{x}_{n})\), and the page is represented as

$$ \mathcal{X}_{PQ} = {\left\{\boldsymbol{q}(\boldsymbol{x}_{n})\right\}}_{n=1}^{N}, $$
(2)

where \(\mathcal{X}_{PQ}\) is the set of quantized EOH features for a page. Because each \(\boldsymbol{q}(\boldsymbol{x}_{n})\) is recorded as a tuple of M subcentroid indices (one 8-bit uint8_t each), it takes 8M bits, and \(\mathcal{X}_{PQ}\) is stored in 8MN bits. A sketch of the encoding step follows.
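
This is an illustrative sketch only; the M subcodebooks are assumed to have been trained offline by k-means, as in standard PQ [26].

```python
# Illustrative PQ encoding: split the vector into M subvectors and replace each
# by the index of its nearest subcodebook centroid (codebooks trained offline).
import numpy as np

def pq_encode(x, codebooks):
    """x: (D,) EOH feature; codebooks: (M, 256, D // M) subcentroids."""
    M, ksub, dsub = codebooks.shape
    code = np.empty(M, np.uint8)                 # 8 bits per subquantizer
    for m in range(M):
        sub = x[m * dsub:(m + 1) * dsub]
        d2 = ((codebooks[m] - sub) ** 2).sum(axis=1)
        code[m] = d2.argmin()                    # nearest subcentroid index
    return code                                  # 8M bits in total
```

With the settings adopted in Section 6.1 (c = 8, i.e., a 256-dimensional feature, and M = 16), each EOH feature becomes a 16-byte code.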

3.3 Search

Let us describe how to retrieve the nearest PQ-encoded feature from many manga pages (Fig. 4c). Suppose there are P manga pages, and the quantized EOH features of the pth page are represented as \({\mathcal{X}}_{PQ}^{p} = {\left\{\boldsymbol{q}(\boldsymbol{x}_{n}^{p})\right\}}_{n=1}^{N^{p}}\), where \(N^{p}\) denotes the number of EOH features from the pth page, and \(p \in \{1, \dots, P\}\). Given an EOH feature \(\boldsymbol{y}\) from a query sketch, the search engine computes the nearest neighbor using:

$$ \left< p^{*}, n^{*} \right> = \underset{p \in \{1, \dots, P \}, ~n \in \{ 1, \dots, N^{p} \} }{\arg\min} d_{AD} \left( \boldsymbol{y}, \boldsymbol{q}(\boldsymbol{x}_n^p) \right), $$
(3)

where \(d_{AD}(\cdot, \cdot)\) measures an approximate Euclidean distance between an uncompressed vector and a compressed PQ code [26]. This can be computed efficiently by lookup operations (ADC [26]).

By computing \(d_{AD}\) from \(\boldsymbol{y}\) to each \(\boldsymbol{q}(\boldsymbol{x}_{n}^{p})\) and selecting the smallest value, \(p^{*}\) and \(n^{*}\) are obtained; we can then say that the \(n^{*}\)th feature from the \(p^{*}\)th page is the nearest to the query sketch. This is a linear search over PQ codes, which can be solved efficiently, e.g., 16 ms for searching one million 64-bit codes. The point here is that the problem is separated into a feature description and an ANN search; therefore, even if the number of features is very large, we can employ ANN techniques. In our system, searching 21,142 pages takes 793 ms (without a parallel-thread implementation) or 70 ms (with a parallel-thread implementation) on a single computer using a PQ linear scan, which is fast enough. The search could be made even faster by employing ANN data structures such as [3, 28]. A sketch of the ADC computation is given below.
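
This is an illustrative sketch of ADC (variable names are ours): a query-to-subcentroid lookup table is built once, after which every code is scored with M table lookups.

```python
# Illustrative ADC scan: one lookup table per query, M lookups per PQ code.
import numpy as np

def adc_search(y, codes, codebooks):
    """y: (D,) query; codes: (N, M) uint8 PQ codes; codebooks: (M, 256, D // M)."""
    M, ksub, dsub = codebooks.shape
    lut = np.empty((M, ksub), np.float32)
    for m in range(M):
        sub = y[m * dsub:(m + 1) * dsub]
        lut[m] = ((codebooks[m] - sub) ** 2).sum(axis=1)   # query-to-centroid table
    # Approximate squared distance to each code = sum of M table lookups.
    dists = lut[np.arange(M)[None, :], codes.astype(int)].sum(axis=1)
    return dists.argmin(), dists                           # nearest code and scores
```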

3.4 Skipping margins

Although the PQ search is fast, effective preprocessing of the dataset's manga pages is essential (Fig. 4d). A manga page comprises a series of frames, and the interframe spaces (margins) are not important; for efficient searching, margins should be excluded from the retrieval candidates before extracting features. We label the margins as shown in Fig. 7. First, the lines of the manga page image (Fig. 7a) are thickened by applying an erosion [55] to the white areas (Fig. 7b), thereby filling small gaps between black areas. Next, the white-connected areas are labeled by connected-component labeling [38], as shown in Fig. 7c, where different colors are used to visualize the labels. Finally, margin areas are selected by finding the most frequent label appearing in the outermost peripheral regions (Fig. 7d). Because interframe spaces tend to connect to the outer areas, this method succeeds in most cases. A minimal sketch of the procedure follows.
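
This is an illustrative OpenCV sketch; the binarization threshold and the 3×3 erosion kernel are our assumptions.

```python
# Illustrative margin labeling: erode white regions, label white-connected
# components, and keep the label that dominates the page border.
import cv2
import numpy as np

def margin_mask(page_gray):
    white = (page_gray > 127).astype(np.uint8)               # 1 = white pixel
    white = cv2.erode(white, np.ones((3, 3), np.uint8))      # thicken black lines
    _, labels = cv2.connectedComponents(white)               # label white areas
    border = np.concatenate([labels[0], labels[-1], labels[:, 0], labels[:, -1]])
    border = border[border != 0]                             # ignore black (label 0)
    margin_label = np.bincount(border).argmax()              # dominant border label
    return labels == margin_label                            # True on margin pixels
```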

Fig. 7 a An input page. b Erosion is applied to white regions to thicken the lines. c White-connected areas are labeled with the same value. d The margin areas are selected

Let us define the area of a patch \(\boldsymbol{x}\) as \(S(\boldsymbol{x})\), and the margin area it contains as \(U(\boldsymbol{x})\) (e.g., the colored areas shown in Fig. 7d). We extract a feature only if the area ratio U/S is less than a threshold, which we set to 0.1. Equation (2) is therefore rewritten as

$$ \mathcal{X}_{PQ}=\left\{ \boldsymbol{q}(\boldsymbol{x}_{n}) \middle | \frac{U(\boldsymbol{x}_{n})}{S(\boldsymbol{x}_{n})} < 0.1,~~n \in \{1, \dots, N \} \right\}. $$
(4)

Intuitively, this means that if an area belongs to the margin, it is skipped in the feature extraction step. An example of the process is shown in Fig. 8.
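
As a sketch of this test (illustrative; in practice the integral image would be computed once per page rather than per window, in the same spirit as the integral images of Section 3.1):

```python
# Illustrative margin test of (4): with an integral image of the margin mask,
# U/S of any square window is available in constant time.
import numpy as np

def keep_window(margin, x, y, s, thresh=0.1):
    """margin: boolean margin mask; (x, y, s): window left, top, and side length."""
    ii = np.pad(margin.cumsum(0).cumsum(1), ((1, 0), (1, 0)))   # integral image
    U = ii[y + s, x + s] - ii[y, x + s] - ii[y + s, x] + ii[y, x]
    return U / float(s * s) < thresh                 # extract the feature or skip
```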

Fig. 8 a Input image. b Its margin labels. In case (i), the red area in (a) is skipped because U/S=0.6>0.1. In case (ii), by contrast, the corresponding area is all black, and a feature is therefore extracted. In case (iii), U/S=0.08<0.1, and an EOH feature is extracted

4 Query interaction

Querying is a difficult issue for manga retrieval. For a natural and intuitive interface, we prefer sketch-based queries: because manga themselves consist of line drawings by their authors, sketching is highly compatible with manga.

In addition, we can make use of sketching not only for the initial query, but also for additional interaction with the retrieved results. The queries that can be performed using our framework are summarized as follows: (1) sketch querying, the proposed sketch-based retrieval described above; (2) relevance feedback, which reuses the retrieved results; and (3) query retouch, in which the results of relevance feedback are modified and reused.

Relevance feedback

We propose an interaction to reuse retrieved results, which is inspired by the relevance feedback techniques proposed in the information retrieval literature [41]. Users can reuse a retrieved result simply by selecting a region in the retrieved manga page, as shown in Fig. 9. With relevance feedback, even novice users can use professional manga images as queries. Note that a user can use any region in the page as a query.

Fig. 9 Relevance feedback. a A user draws strokes and finds a target (Japanese-style clothes) in the third and fifth results (red borders). b By selecting the region in the retrieved page, another retrieval is performed automatically with this as the query. c A wide range of images of Japanese-style clothes is obtained

Query retouch

We propose a new methodology for modifying and reusing the results. In query retouch, we can modify either the initial sketch or a query taken from the results of a retrieval (Fig. 10). As the query is changed by adding lines or partially erasing them, the results change immediately. As a result, we can intuitively steer the retrieval results in the direction we want.

Fig. 10 Query retouch. a Results of relevance feedback. Target characters have red borders. b A user adds strokes and draws glasses on the target character. Other target characters with glasses are then successfully retrieved (red borders). Note that users can erase lines using an eraser tool. c Both relevance feedback and query retouch can be conducted multiple times

According to Mei and colleagues [41], these interactions are classified as “interactive reranking,” and are especially useful for the visual search problem because “visual data is particularly suited for interaction, as a user can quickly grasp the vivid visual information and thus judge the relevance at a quick glance.” [41].

Note that we designed the proposed pipeline to be scale-invariant, rotation-variant, and flip-variant. Scale invariance is required because the query should match image regions of any size; this is achieved by extracting features of the same dimension from any given region (Section 3.1). Regarding rotation, the system does not return results that are similar to the query only when rotated; we chose this design to suppress unusual results, such as upside-down faces. Regarding flips, we decided to reject flipped results to make the sketch interaction more intuitive. Note that a flip-invariant search is technically easy to implement, simply by flipping the query feature and performing the retrieval again.

5 Manga dataset

In this section, we introduce a new dataset of manga images for evaluation, namely the Manga109 dataset. The Manga109 dataset consists of 109 manga titles and is publicly available for academic research purposes with proper copyright notation. Figure 11 shows example pages from the Manga109 dataset.

Fig. 11 Example images from the Manga109 dataset. Each caption shows the bibliographic information of an image (title, volume, and page)

A large-scale dataset of manga images is important and useful for manga research. Although several papers related to manga image processing have been published, fair comparisons among the proposed methods have not been conducted because of the lack of a dataset with which the methods can be evaluated. In preparing a dataset, the most serious and intractable problem is copyright: because manga are artwork protected by copyright, it is hard to construct a publicly available dataset. Therefore, researchers have collected their own manga images or used manga images drawn by amateurs. These are not appropriate for manga research because (1) the number of manga images is small, (2) the artistic quality of the images is not always high, and (3) they are not publicly available. The Manga109 dataset, on the other hand, resolves all three issues. It contains 21,142 pages in total, and we can say that it is currently the largest manga image dataset. The quality of the images is high because all titles were drawn by professional manga authors.

As is widely known, well-crafted image datasets have played critical roles in evolving image processing technologies, e.g., the PASCAL VOC datasets [17] for image recognition in the 2000s, and ImageNet [51] for recent rapid progress in deep architecture. We hope the Manga109 dataset will contribute to the further development of the manga research domain.

Image collection

All manga titles in the Manga109 dataset were previously published in manga magazines, i.e., they were drawn by professional manga authors. The titles come from “Manga Library Z,” an archive run by J-Comic Terrace that holds more than 3,000 titles, most of which are currently out of print. With the help of J-Comic Terrace, we chose 109 titles from the archive that cover a wide range of genres and publication years, and we obtained permission from the creators to use them for academic research purposes. Researchers can thus use them freely, with appropriate citation, for research challenges including not only retrieval and localization but also colorization, data mining from manga, and so on. The manga titles were originally published from the 1970s to the 2010s. The Manga109 dataset covers various categories, including humor, battle, romantic comedy, animal, science fiction, sports, historical drama, fantasy, love romance, suspense, horror, and four-frame cartoons. Please see the details (titles, author names, and so on) on our project page [32].

Dataset statistics

The dataset consists of 109 manga titles. Each title includes 194 pages on average, for a total of 21,142 pages. The average image size is 833×1179 pixels, which is larger than the image sizes usually used for object recognition and retrieval tasks (e.g., 482×415 pixels on average for ILSVRC 2013). This is because manga pages contain multiple frames and thus require higher resolutions.

Relation to other datasets

eBDtheque [19] is a publicly available comic dataset. It consists of 100 comic pages with detailed meta-information such as text annotations. Compared with eBDtheque, our Manga109 dataset contains a much larger number of pages. Manga109 is useful for large-scale experiments, whereas eBDtheque is beneficial for small but detailed evaluations such as object boundary detection.

6 Experimental results

In this section, we present three evaluations: (1) a comparative study with previous methods for manga description (Section 6.1); (2) a localization evaluation (Section 6.2); and (3) a large-scale qualitative study (Section 6.3). In the comparative study and the localization evaluation, we used a single-thread implementation for a fair comparison, and employed a parallel implementation for the large-scale study.

6.1 Comparative study

A comparative study was performed to evaluate how well the proposed framework could represent manga images compared with previous methods such as those introduced in Section 2. We compared our proposal with a baseline (BoF with large window SIFT [54]), the state-of-the-art of BoF-based methods (FV [54]), and the state-of-the-art of chamfer-based methods (Compact OCM [62]). All experiments were conducted on a PC with a 2.8 GHz Intel Core i7 CPU and 32 GB RAM, using C++ implementations.

Frame image dataset

For evaluation, we cropped frames from 10 representative manga titles in the Manga109 dataset. There were 8,889 cropped frames, with an average size of 372×341 pixels, and we used these frames for retrieval. Note that we used frames instead of pages for this comparison because the features of BoF, FV, and Compact OCM are only comparable within a frame. Although a frame is less complex than a page, retrieval is still not easy because frame sizes vary greatly and the localization problem remains, as shown in Fig. 12c, where the target (a head of a boy) is small and the frame includes other objects and background.

Fig. 12 Targets for the comparative study. Left to right: query sketches by novice artists, query sketches by skilled artists, ground-truth images, and ground-truth images with screentone removal. Top to bottom: the targets Boy-with-glasses, Chombo, and Tatoo

Target images

For the comparisons, we chose three kinds of targets: Boy-with-glasses (Fig. 12c), Chombo (Fig. 12g), and Tatoo (Fig. 12k), which are from the manga titles Lovehina vol. 1, Mukoukizu no Chombo, and DollGun, respectively. Boy-with-glasses is an easy example; Chombo is much harder because the face might point either right or left; and Tatoo is the most difficult because it is very small compared with the size of the frame. These target images are treated as ground truths.

Query images

To prepare query sketches, we invited 10 participants (seven novices and three skilled artists, e.g., members of an art club) and asked them to draw sketches. First, we showed them a target image for 20 s; they were then asked to draw a sketch of it as a query. Each participant drew all target objects; we therefore collected 10×3=30 queries. Examples of the queries are shown in Fig. 12. We used a 21-inch pen display (WACOM DTZ-2100) for drawing.

Results of comparative study

Using the queries, we evaluated each method with standard evaluation protocols in image retrieval: recall@k and mean average precision [4]. Note that the ground-truth target images were labeled manually. All frame images were used for the evaluation, and the images of the other targets were regarded as distractors. The statistics of the frame images are shown in Table 1.

Table 1 Image statistics for comparative study. All images are cropped frames from the Manga109 dataset

Figure 13 shows the results for each target. Note that this task is challenging, and all scores tend to be small; the queries from users are not always similar to the target, and some novices even drew queries that were dissimilar to the target, as shown in Fig. 12e. Still, in all cases, the proposed method achieved the best scores. In particular, the proposed method outperformed the other methods for Boy-with-glasses. Note that BoF and FV received almost zero scores for the Tatoo case because they are not good at finding a relatively small instance in an image. From these experiments, we conclude that BoF-based methods are not well suited to manga retrieval.

Fig. 13 Results of the comparative study. Values in the legend show Recall@100

Note that, for a fair comparison, we did not apply any approximation steps: Compact OCM measured the original chamfer distance without its approximation (a sparse projection), and we did not compress features using PQ in this experiment.

About quantization

When we applied PQ to the features, the score decreased according to the quantization level, as shown in Fig. 14; there is a clear trade-off between the compression rate and accuracy. Based on these results, we adopted M=16 as a reasonable option; i.e., a feature is divided into 16 parts for PQ compression and encoded into a 16-byte code. Interestingly, there is no clear relation between accuracy and the number of cells (\(c^{2}\)): the feature description becomes finer with a larger number of cell divisions, but this does not always achieve higher recall, which indicates that some level of abstraction is required for the sketch retrieval task. We employed c=8 for all experiments, i.e., a selected area is divided into 8×8 cells for feature description.

Fig. 14 The effect of feature compression by PQ. The Y-axis represents the retrieval performance, and the X-axis shows the number of cells. Each line corresponds to a compression level. As the features are compressed by PQ (i.e., represented by a smaller number of subvectors M), the score decreases compared with the original uncompressed feature

Parameter settings and implementation details

We now describe the parameter settings and implementation details used for the evaluation. For BoF, SIFT features are densely extracted, where the size of a SIFT patch is 64×64 pixels, with a 2×2 spatial pyramid; the number of vectors in the dictionary is set to 1,024, so the final number of dimensions is 4,096. For FV, a Gaussian mixture model with 256 Gaussians was used, with a 2×2 spatial pyramid; the dimensionality of SIFT is reduced from 128 to 80 by PCA. For BoF and FV, we used the same parameter settings as Schneider and colleagues [54], with the vlfeat implementation [67]. For selective search, we employed the dlib implementation [30], applying slight Gaussian blurring beforehand. To train codewords for BoF, FV, and PQ, we randomly selected 500 images from the dataset; these were excluded from testing. To eliminate small patches, we set the minimum side length of a patch to 100 pixels and discarded smaller patches.

6.2 Localization evaluation

Next, we evaluated how well the proposed method can localize a target object in manga pages. The setup is similar to that for image detection evaluation [17].

Images

As query sketches, we used the Boy-with-glasses sketches collected in Section 6.1. We prepared two datasets for evaluation: (i) the Lovehina dataset, which consists of the 192 pages of the title Lovehina vol. 1, including the pages containing the ground-truth windows; and (ii) the Manga109 dataset, the sum of all manga data, which consists of 109 titles with a total of 21,142 pages. Note that the Lovehina data is included in the Manga109 dataset, so (i) is a subset of (ii). The ground-truth areas (69 windows) were manually annotated in the Lovehina dataset.

In contrast to the previous comparative study, this is an object localization task, i.e., given a query image, find a target instance in an image. In our case, we must find the target from many manga pages (21,142 pages for Manga109).

Evaluation criteria

For evaluation, we employed the standard PASCAL overlap criterion [17]. A bounding box (retrieved result) returned by a method is judged to be true or false by measuring its overlap with the ground-truth windows. Denoting the predicted bounding box as \(B_{p}\) and the ground-truth bounding box as \(B_{gt}\), the overlap is measured by:

$$ r = \frac{area(B_{p} \cap B_{gt})}{area(B_{p} \cup B_{gt})}. $$
(5)

We judge a retrieved area to be correct if r>0.5. If multiple bounding boxes are produced for the same ground truth, at most one of them is counted as correct.

For each query, we find the top 100 areas in the dataset (Lovehina or Manga109) using the proposed retrieval method; the retrieved areas are then judged using (5). From the resulting sequence of true/false judgments, we compute the standard mAP@100. The overlap test amounts to the check sketched below.
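
A minimal sketch of the test in (5), with boxes given as (x1, y1, x2, y2); the helper names are ours.

```python
# Illustrative PASCAL overlap test: intersection over union, correct if r > 0.5.
def iou(bp, bgt):
    ix = max(0, min(bp[2], bgt[2]) - max(bp[0], bgt[0]))   # intersection width
    iy = max(0, min(bp[3], bgt[3]) - max(bp[1], bgt[1]))   # intersection height
    inter = ix * iy
    union = ((bp[2] - bp[0]) * (bp[3] - bp[1])
             + (bgt[2] - bgt[0]) * (bgt[3] - bgt[1]) - inter)
    return inter / float(union) if union > 0 else 0.0

def is_correct(bp, gt_boxes):
    return any(iou(bp, bgt) > 0.5 for bgt in gt_boxes)
```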

Result of localization evaluation

We show the results in Table 2. With our single-thread implementation, searching the Manga109 dataset (14M patches) took 331 ms. This is fast enough for interaction, and the computation can be further improved (to 70 ms) using the parallel implementation discussed in Section 6.3. We also show theoretical values of the memory consumption for the EOH features (#patches × 8M bits); the whole Manga109 dataset consumes only 204 MB. As can be seen from the mAP, this task is difficult because there are a huge number of candidate windows (138K for Lovehina and 14M for Manga109) for only 69 ground-truth areas. Examples of retrieved results are shown in Fig. 15. For the Lovehina dataset, the first result is a failure but the second is correct. For the Manga109 dataset, the first success appears at the 35th result; the first result shares characteristics with the query (wearing glasses) even though it is incorrect.

Fig. 15 Examples of localization experiments for the Lovehina dataset and the Manga109 dataset

Table 2 Results for localization evaluation (single thread implementation)

6.3 Large-scale qualitative study

In this section, we show a qualitative study of retrieval from the Manga109 dataset. The whole system was implemented using a GUI as shown in Fig. 2.

We employed a parallel implementation using the Intel Threading Building Blocks (TBB) library; the average computation time was 70 ms for the Manga109 dataset (21,142 images). The parallelization was straightforward: neighbors were computed for each manga title in parallel. In the implementation, we selected the most similar feature from each page (rather than keeping all features per page) and then merged the results, as sketched below.
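
The actual system is C++ with TBB; the following thread-pool analogue is an illustrative sketch only, where `search_title` is a hypothetical helper returning the best (distance, page, window) hit within one title.

```python
# Illustrative per-title parallel search (the real system uses Intel TBB in C++).
from concurrent.futures import ThreadPoolExecutor

def search_all_titles(query, titles, search_title):
    """titles: iterable of per-title data; search_title: hypothetical helper."""
    with ThreadPoolExecutor() as pool:
        hits = pool.map(lambda title: search_title(query, title), titles)
    return min(hits, key=lambda hit: hit[0])   # merge: keep the overall best hit
```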

Qualitative study using a sketch dataset

We qualitatively evaluated the proposed method using a public sketch dataset as queries: representative sketches from [15]. The 347 sketches each have a category name, e.g., “panda.”

Figure 16a and b show successful examples. We could retrieve objects from the Manga109 dataset successfully. In particular, the retrieval works well if the target consists of simple geometric shapes such as squares, as shown in Fig. 16b. This tendency is the same as that for previous sketch-based image retrieval systems [16, 62]. Figure 16c shows a failure example, although the retrieved glass is similar to the query.

Fig. 16 Results of the subjective study using representative sketches [15] as queries

As can be seen in Fig. 16c, text regions are sometimes retrieved and placed at the top of the ranking. Because users usually do not want such results, detecting and eliminating text areas would improve the results; this remains future work.

More results by relevance feedback

We show more results for queries of character faces using the proposed relevance feedback in Fig. 17. We see that the top retrieved results were the same as (or similar to) the characters in the query; in this case, all the results were drawn by the same author. Interestingly, our edge histogram feature captured the characteristics of the author's style. Figure 17b shows the results for “blush face.” In Japanese manga, such blushing faces are represented by hatching. Through relevance feedback, blushing characters were retrieved from various manga titles. These character-based retrievals are made possible by content-based search, which suggests that the proposed query interactions are beneficial for manga search.

Fig. 17 More results for relevance feedback from the Manga109 dataset

7 Conclusions

We proposed a sketch-based manga retrieval system and novel query schemes. The retrieval system consists of three steps: labeling of margin areas, EOH feature description with screentone removal, and approximate nearest-neighbor search using product quantization. The query schemes include relevance feedback and query retouch, both of which are new schemes that interactively rerank the retrieved results. We built a new dataset of manga images, Manga109, which consists of 21,142 manga images drawn by 94 professional manga artists. To the best of our knowledge, Manga109 is currently the largest manga image dataset, and it is available to the research community. Experimental results showed that the proposed framework is efficient and scalable (70 ms to search 21,142 pages on a single computer, using 204 MB of memory).

Combining the sketch- and keyword-based searches is a promising direction for future work.