Scaling up sign spotting through sign language dictionaries


Abstract
The focus of this work is sign spotting: given a video of an isolated sign, our task is to identify whether and where it has been signed in a continuous, co-articulated sign language video. To achieve this sign spotting task, we train a model using multiple types of available supervision by: (1) watching existing footage which is sparsely labelled using mouthing cues; (2) reading associated subtitles (readily available translations of the signed content) which provide additional weak supervision; (3) looking up words (for which no co-articulated labelled examples are available) in visual sign language dictionaries to enable novel sign spotting. These three tasks are integrated into a unified learning framework using the principles of Noise Contrastive Estimation and Multiple Instance Learning. We validate the effectiveness of our approach on low-shot sign spotting benchmarks. In addition, we contribute a machine-readable British Sign Language (BSL) dictionary dataset of isolated signs, BslDict, to facilitate study of this task. The dataset, models and code are available at our project page.

Introduction
The objective of this work is to develop a sign spotting model that can identify and localise instances of signs within sequences of continuous sign language. Sign languages represent the natural means of communication for deaf communities [56] and sign spotting has a broad range of practical applications. Examples include: indexing videos of signing content by keyword to enable content-based search; gathering diverse dictionaries of sign exemplars from unlabelled footage for linguistic study; automatic feedback for language students via an "auto-correct" tool (e.g. "did you mean this sign?"); making voice activated wake word devices available to deaf communities; and building sign language datasets by automatically labelling examples of signs.
Recently, deep neural networks, equipped with large-scale, labelled datasets, have produced considerable progress in audio [23,59] and visual [42,53] keyword spotting in spoken languages. However, a direct replication of these keyword spotting successes in sign language requires a commensurate quantity of labelled data (note that modern audiovisual spoken keyword spotting datasets contain millions of densely labelled examples [2,19]), but such datasets are not available for sign language.
It might be thought that a sign language dictionary would offer a relatively straightforward solution to the sign spotting task, particularly to the problem of covering only a limited vocabulary in existing large-scale corpora. Unfortunately, this is not the case due to the severe domain differences between dictionaries and continuous signing in the wild. The challenges are that sign language dictionaries typically: (1) consist of isolated signs which differ in appearance from the co-articulated sequences of continuous signs (for which we ultimately wish to perform spotting); (2) differ in speed (are performed more slowly) relative to co-articulated signing; (3) possess only a few examples of each sign (so learning must be low-shot); and (4) may contain multiple signs corresponding to a single keyword, for example due to regional variations of the sign language [50]. We show through experiments in Sec. 4 that directly training a sign spotter for continuous signing on dictionary examples, obtained from an internet-sourced sign language dictionary, does indeed perform poorly.

Fig. 1: We consider the task of sign spotting in co-articulated, continuous signing. Given a query dictionary video of an isolated sign (e.g., "apple"), we aim to identify whether and where it appears in videos of continuous signing. The wide domain gap between dictionary examples of isolated signs and target sequences of continuous signing makes the task extremely challenging.
To address these challenges, we propose a unified framework in which sign spotting embeddings are learned from the dictionary (to provide broad coverage of the lexicon) in combination with two additional sources of supervision. In aggregate, these multiple types of supervision include: (1) watching sign language and learning from existing sparse annotations obtained from mouthing cues [5]; (2) exploiting weak-supervision by reading the subtitles that accompany the footage and extracting candidates for signs that we expect to be present; (3) looking up words (for which we do not have labelled examples) in a sign language dictionary. The recent development of a large-scale, subtitled dataset of continuous signing providing sparse annotations [5] allows us to study this problem setting directly. We formulate our approach as a Multiple Instance Learning problem in which positive samples may arise from any of the three sources and employ Noise Contrastive Estimation [32] to learn a domain-invariant (valid across both isolated and co-articulated signing) representation of signing content.
Our loss formulation is an extension of InfoNCE [46,63] (and in particular the multiple instance variant MIL-NCE [41]). The novelty of our method lies in the batch formulation that leverages the mouthing annotations, subtitles, and visual dictionaries to define positive and negative bags. Moreover, this work specifically focuses on computing similarities across two different domains to learn matching between isolated and co-articulated signing.
We make the following contributions, originally introduced in [43]: (1) We provide a machine-readable British Sign Language (BSL) dictionary dataset of isolated signs, BslDict, to facilitate study of the sign spotting task; (2) We propose a unified Multiple Instance Learning framework for learning sign embeddings suitable for spotting from three supervisory sources; (3) We validate the effectiveness of our approach on a co-articulated sign spotting benchmark for which only a small number (low-shot) of isolated signs are provided as labelled training examples, and (4) achieve state-of-the-art performance on the BSL-1K sign spotting benchmark [5] (closed vocabulary). We show qualitatively that the learned embeddings can be used to (5) automatically mine new signing examples, and (6) discover "faux amis" (false friends) between sign languages. In addition, we extend these contributions with (7) the demonstration that our framework can be effectively deployed to obtain large numbers of sign examples, enabling state-of-the-art performance to be reached on the BSL-1K sign recognition benchmark [5], and on the recently released BOBSL dataset [4].

Related Work
Our work relates to several themes in the literature: sign language recognition (and more specifically sign spotting), sign language datasets, multiple instance learning and low-shot action localization. We discuss each of these themes next.
Sign language recognition. The study of automatic sign recognition has a rich history in the computer vision community, stretching back over 30 years, with early methods developing carefully engineered features to model trajectories and shape [30,37,54,57]. A series of techniques then emerged which made effective use of hand and body pose cues through robust keypoint estimation encodings [10,22,45,49]. Sign language recognition has also been considered in the context of sequence prediction, with HMMs [3,31,37,54], LSTMs [11,35,66,68], and Transformers [12] proving to be effective mechanisms for this task. Recently, convolutional neural networks have emerged as the dominant approach for appearance modelling [11], and in particular, action recognition models using spatio-temporal convolutions [16] have proven very well-suited for video-based sign recognition [5,36,39]. We adopt the I3D architecture [16] as a foundational building block in our studies.
Sign language spotting. The sign language spotting problem, in which the objective is to find performances of a sign (or sign sequence) in a longer sequence of signing, has been studied with Dynamic Time Warping and skin colour histograms [60] and with Hierarchical Sequential Patterns [26]. Different from our work, which learns representations from multiple weak supervisory cues, these approaches consider a fully-supervised setting with a single source of supervision and use handcrafted features to represent signs [27]. Our proposed use of a dictionary is also closely tied to one-shot/few-shot learning, in which the learner is assumed to have access to only a handful of annotated examples of the target category. One-shot dictionary learning was studied by [49]; different to their approach, we explicitly account for variations in the dictionary for a given word (and validate the improvements brought by doing so in Sec. 4).
Textual descriptions from a dictionary of 250 signs were used to study zero-shot learning by [9] -we instead consider the practical setting in which a handful of video examples are available per-sign and work with a much larger vocabulary (9K words and phrases).
The use of dictionaries to locate signs in subtitled video also shares commonalities with domain adaptation, since our method must bridge differences between the dictionary and the target continuous signing distribution. A vast number of techniques have been proposed to tackle distribution shift, including several adversarial feature alignment methods that are specialised for the few-shot setting [44,67]. In our work, we explore the domain-specific batch normalization (DSBN) method of [18], finding ultimately that simple batch normalization parameter re-initialization is most effective when jointly training on two domains after pretraining on the bigger domain. The concurrent work of [40] also seeks to align representations of isolated and continuous signs. However, our work differs from theirs in several key aspects: (1) rather than assuming access to a large-scale labelled dataset of isolated signs, we consider the setting in which only a handful of dictionary examples may be used to represent a word; (2) we develop a generalised Multiple Instance Learning framework which allows the learning of representations from weakly-aligned subtitles whilst exploiting sparse labels from mouthings [5] and dictionaries (this integrates cues beyond the learning formulation in [40]); (3) we seek to label and improve performance on co-articulated signing (rather than improving recognition performance on isolated signing). Also related to our work, [49] uses a "reservoir" of weakly labelled sign footage to improve the performance of a sign classifier learned from a small number of examples. Different to [49], we propose a multiple instance learning formulation that explicitly accounts for signing variations that are present in the dictionary.
Sign language datasets. A number of sign language datasets have been proposed for studying Finnish [60], German [38,61], American [7,36,39,62] and Chinese [17,35] sign recognition.
For British Sign Language (BSL), [51] gathered the BSL Corpus, which represents continuous signing labelled with fine-grained linguistic annotations. More recently, [5] collected BSL-1K, a large-scale dataset of BSL signs that were obtained using a mouthing-based keyword spotting model. Further details on this method are given in Sec. 3.1. In this work, we contribute BslDict, a dictionary-style dataset that is complementary to the datasets of [5,51]: it contains only a handful of instances of each sign, but achieves a comprehensive coverage of the BSL lexicon with a 9K English vocabulary (vs a 1K vocabulary in [5]). As we show in the sequel, this dataset enables a number of sign spotting applications. While BslDict does not represent a linguistic corpus, as the correspondences to English words and phrases are not carefully annotated with glosses, it is significantly larger than its linguistic counterparts (e.g., 4K videos corresponding to 2K words in BSL SignBank [29], as opposed to 14K videos of 9K words in BslDict); BslDict is therefore particularly suitable to be used in conjunction with subtitles.

Fig. 2: Overview of the three sources of supervision: (1) watching videos and learning from sparse annotation in the form of localised signs obtained from mouthings [5] (lower-left); (2) reading subtitles to find candidate signs that may appear in the source footage (top); (3) looking up corresponding visual examples in a sign language dictionary and aligning the representation against the embedded source segment (lower-right).
Multiple instance learning. Motivated by the readily available sign language footage that is accompanied by subtitles, a number of methods have been proposed for learning the association between signs and words that occur in the subtitle text [10,20,21,49]. In this work, we adopt the framework of Multiple Instance Learning (MIL) [24] to tackle this problem, previously explored by [10,48]. Our work differs from these works through the incorporation of a dictionary, and a principled mechanism for explicitly handling sign variants, to guide the learning process. Furthermore, we generalise the MIL framework so that it can learn to further exploit sparse labels. We also conduct experiments at significantly greater scale to make use of the full potential of MIL, considering more than two orders of magnitude more weakly supervised data than [10,48].
Low-shot action localization. This theme investigates semantic video localization: given one or more query videos the objective is to localize the segment in an untrimmed video that corresponds semantically to the query video [13,28,65]. Semantic matching is too general for the sign-spotting considered in this paper. However, we build on the temporal ordering ideas explored in this theme.

Learning Sign Spotting Embeddings from Multiple Supervisors
In this section, we describe the task of sign spotting and the three forms of supervision we assume access to. Let X_L denote the space of RGB video segments containing a frontal-facing individual communicating in sign language L, and denote by X_L^single its restriction to the set of segments containing a single sign. Further, let T denote the space of subtitle sentences and let V_L = {1, . . . , V} denote the vocabulary, an index set corresponding to an enumeration of written words that are equivalent to signs that can be performed in L.
Our objective, illustrated in Fig. 1, is to discover all occurrences of a given keyword in a collection of continuous signing sequences. To do so, we assume access to: (i) a subtitled collection of videos containing continuous signing, S = {(x_i, s_i) : i ∈ {1, . . . , I}, x_i ∈ X_L, s_i ∈ T}; (ii) a sparse collection, M, of temporal sub-segments of these videos that have been annotated with their corresponding words; and (iii) a curated dictionary, D, of isolated signing examples. To address the sign spotting task, we propose to learn a data representation f : X_L → R^d that maps video segments to vectors such that they are discriminative for sign spotting and invariant to other factors of variation. Formally, for any labelled pair of video segments (x, v), (x', v') with x, x' ∈ X_L and v, v' ∈ V_L, we seek a data representation, f, that satisfies the constraint δ_{f(x)f(x')} = δ_{vv'}, where δ represents the Kronecker delta.

Fig. 3: The annotation pipeline of [5]. (Left): Stage 1: for a given sign (e.g. "happy"), each instance of the word in the subtitles provides a candidate temporal segment where the sign may occur (the subtitle timestamps are padded by several seconds to account for the asynchrony between audio-aligned subtitles and signing interpretation); Stage 2: a mouthing visual keyword spotter uses the lip movements of the signer to perform precise localisation of the sign within this window. (Right): Examples of localised signs through mouthings from the BSL-1K dataset, produced by applying keyword spotting for a vocabulary of 1K words.

Sparse annotations from mouthing cues
As the source of temporal video segments with corresponding word annotations, M, we make use of automatic annotations that were collected as part of our prior work on visual keyword spotting with mouthing cues [5], which we briefly recap here. Signers sometimes mouth a word while simultaneously signing it, as an additional signal [8,55,56], performing similar lip patterns as for the spoken word. Fig. 3 presents an overview of how we use such mouthings to spot signs.
As a starting point for this approach, we assume access to TV footage that is accompanied by: (i) a frontal facing sign language interpreter, who provides a translation of the spoken content of the video, and (ii) a subtitle track, representing a direct transcription of the spoken content. The method of [5] first searches among the subtitles for any occurrences of "keywords" from a given vocabulary. Subtitles containing these keywords provide a set of candidate temporal windows in which the interpreter may have produced the sign corresponding to the keyword (see Fig. 3, Left, Stage 1).
However, these temporal windows are difficult to make use of directly since: (1) the occurrence of a keyword in a subtitle does not ensure the presence of the corresponding sign in the signing sequence; and (2) the subtitles themselves are not precisely aligned with the signing, and can differ in time by several seconds. To address these issues, [5] demonstrated that the sign corresponding to a particular keyword can be localised within a candidate temporal window (given by the padded subtitle timings, to account for the asynchrony between the audio-aligned subtitles and signing interpretation) by searching for its spoken components [56] amongst the mouth movements of the interpreter. While there are challenges associated with using spoken components as a cue (signers do not typically mouth continuously and may only produce mouthing patterns that correspond to a portion of the keyword [56]), it has the significant advantage of transforming the general annotation problem from classification (i.e., "which sign is this?") into the much easier problem of localisation (i.e., "find a given token amongst a short sequence"). In [5], the visual keyword spotter uses the candidate temporal window with the target keyword to estimate the probability that the sign was mouthed at each time step. If the peak probability over time is above a threshold parameter, the predicted location of the sign is taken as the 0.6 second window starting before the position of the peak probability (see Fig. 3, Left, Stage 2). For building the BSL-1K dataset, [5] uses a probability threshold of 0.5 and runs the visual keyword spotter with a vocabulary of 1,350 keywords across 1,000 hours of signing. A further filtering step is performed on the vocabulary to ensure that each word included in the dataset is represented with high confidence (at least one instance with confidence 0.8) in the training partition, which produces a final dataset vocabulary of 1,064 words.
The resulting BSL-1K dataset has 273K mouthing annotations, some of which are illustrated in Fig. 3 (right). We employ these annotations directly to form the set M in this work.
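The Stage 2 selection rule described above can be sketched as follows. This is an illustrative re-implementation, not the original code; the frame rate and the exact window placement convention are assumptions.

```python
# Sketch of the mouthing-based localisation rule: given per-frame keyword
# probabilities inside a padded subtitle window, keep the sign only if the
# peak probability clears a threshold, and predict a 0.6 s window ending at
# the peak (assumed convention: "starting before the peak").

def localise_sign(probs, fps=25, threshold=0.5, window_s=0.6):
    """Return (start_time, end_time) in seconds, or None if below threshold.

    probs: per-frame mouthing probabilities for one candidate window.
    """
    if not probs:
        return None
    peak_idx = max(range(len(probs)), key=lambda i: probs[i])
    if probs[peak_idx] < threshold:
        return None
    t_peak = peak_idx / fps
    return (max(0.0, t_peak - window_s), t_peak)

# The peak (0.9) is at frame 50 of a 25 fps window, i.e. at 2.0 s, so the
# sign is predicted in the interval [1.4, 2.0].
probs = [0.1] * 50 + [0.9] + [0.2] * 10
print(localise_sign(probs))
```

A lower threshold yields more annotations at the cost of precision, which is why [5] additionally filters the vocabulary by per-word confidence.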

Integrating cues through multiple instance learning
To learn f , we must address several challenges. First, as noted in Sec. 1, there may be a considerable distribution shift between the dictionary videos of isolated signs in D and the co-articulated signing videos in S. Second, sign languages often contain multiple sign variants for a single written word (e.g., resulting from regional variations and synonyms). Third, since the subtitles in S are only weakly aligned with the sign sequence, we must learn to associate signs and words from a noisy signal that lacks temporal localisation. Fourth, the localised annotations provided by M are sparse, and therefore we must make good use of the remaining segments of subtitled videos in S if we are to learn an effective representation.
Given full supervision, we could simply adopt a pairwise metric learning approach to align segments from the videos in S with dictionary videos from D by requiring that f maps a pair of isolated and co-articulated signing segments to the same point in the embedding space if they correspond to the same sign (positive pairs) and apart if they do not (negative pairs). As noted above, in practice we do not have access to positive pairs because: (1) for any annotated segment (x k , v k ) ∈ M, we have a set of potential sign variations represented in the dictionary (annotated with the common label v k ), rather than a single unique sign; (2) since S provides only weak supervision, even when a word is mentioned in the subtitles we do not know where it appears in the continuous signing sequence (if it appears at all). These ambiguities motivate a Multiple Instance Learning [24] (MIL) objective. Rather than forming positive and negative pairs, we instead form positive bags of pairs, P bags , in which we expect at least one pairing between a segment from a video in S and a dictionary video from D to contain the same sign, and negative bags of pairs, N bags , in which we expect no (video segment, dictionary video) pair to contain the same sign.
To incorporate the available sources of supervision into this formulation, we consider two categories of positive and negative bag formations, described next (a formal mathematical description of the positive and negative bags described below is deferred to Appendix C.2).
Watch and Lookup: using sparse annotations and dictionaries. Here, we describe a baseline where we assume no subtitles are available. To learn f from M and D, we define each positive bag as the set of possible pairs between a labelled (foreground) temporal segment of a continuous video from M and the examples of the corresponding sign in the dictionary (green regions in Fig. A.2). The key assumption here is that each labelled sign segment from M matches at least one sign variation in the dictionary. Negative bags are constructed by (i) anchoring on a continuous foreground segment and selecting dictionary examples corresponding to different words from other batch items; and (ii) anchoring on a dictionary foreground set and selecting continuous foreground segments from other batch items (red regions in Fig. A.2). To maximize the number of negatives within one minibatch, we sample a different word per batch item.
Watch, Read and Lookup: using sparse annotations, subtitles and dictionaries. Using just the labelled sign segments from M to construct bags has a significant limitation: f is not encouraged to represent signs beyond the initial vocabulary represented in M. We therefore look at the subtitles (which contain words beyond M) to construct additional bags. We define additional positive bags between the set of unlabelled (background) segments in the continuous footage and the set of dictionary examples corresponding to the background words in the subtitle (green regions in Fig. 4, right-bottom). Negatives (red regions in Fig. 4) are formed as the complements to these sets by (i) pairing continuous background segments with dictionary samples that can be excluded as matches (through subtitles) and (ii) pairing background dictionary entries with the foreground continuous segment. In both cases, we also define negatives from other batch items by selecting pairs where the words have no overlap; e.g., in Fig. 4, the dictionary examples for the background word 'speak' from the second batch item are negatives for the background continuous segments from the first batch item, corresponding to the unlabelled words 'name' and 'what' in the subtitle.
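To make the Watch and Lookup bag construction concrete, the following sketch assembles positive and negative bags for a minibatch in which each item carries one labelled continuous segment and the dictionary variants of its word. This is an illustrative simplification (the identifiers and data layout are assumptions, not the paper's code), and it covers only negative construction (i) above.

```python
# Illustrative "Watch and Lookup" bag formation: the positive bag pairs a
# labelled continuous segment with every dictionary variant of its own word
# (at least one pairing is assumed to match); the negative bag pairs it with
# the variants of the other batch items' words (one distinct word per item).

def build_bags(batch):
    """batch: list of (segment_id, word, dict_variant_ids) tuples."""
    pos_bags, neg_bags = [], []
    for i, (seg, word, variants) in enumerate(batch):
        # Positive bag: all (segment, variant) pairs for the same word.
        pos_bags.append([(seg, v) for v in variants])
        # Negative bag: dictionary entries for different words.
        neg = [(seg, v) for j, (_, w, vs) in enumerate(batch)
               if j != i and w != word for v in vs]
        neg_bags.append(neg)
    return pos_bags, neg_bags

batch = [("seg0", "apple", ["apple_v1", "apple_v2"]),
         ("seg1", "tree", ["tree_v1"])]
pos, neg = build_bags(batch)
print(pos[0])  # [('seg0', 'apple_v1'), ('seg0', 'apple_v2')]
print(neg[0])  # [('seg0', 'tree_v1')]
```

The Watch, Read and Lookup variant would extend this with background segments and the dictionary entries of background subtitle words, following the same pattern.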
To assess the similarity of two embedded video segments, we employ a similarity function ψ : R^d × R^d → R whose value increases as its arguments become more similar (in this work, we use cosine similarity). For notational convenience below, we write ψ_ij as shorthand for ψ(f(x_i), f(x_j)). To learn f, we consider a generalised form of the MIL-NCE loss [41]:

L = −E_i [ log ( Σ_{(j,k)∈P(i)} exp(ψ_jk/τ) / ( Σ_{(j,k)∈P(i)} exp(ψ_jk/τ) + Σ_{(l,m)∈N(i)} exp(ψ_lm/τ) ) ) ],   (1)

where P(i) ∈ P_bags, N(i) ∈ N_bags, and τ, often referred to as the temperature, is set as a hyperparameter (we explore the effect of its value in Sec. 4).
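Assuming the similarities ψ for one anchor's positive and negative pairs have been precomputed, the per-anchor MIL-NCE-style term can be sketched in plain Python (illustrative only, not the training code):

```python
import math

# Minimal sketch of a MIL-NCE-style objective: for one anchor, the loss is
# -log of the ratio between the summed exponentiated similarities of its
# positive bag and of its positive plus negative bags. `pos_sims` and
# `neg_sims` hold the precomputed cosine similarities psi for the pairs in
# P(i) and N(i); tau is the temperature.

def mil_nce_loss(pos_sims, neg_sims, tau=0.07):
    pos = sum(math.exp(s / tau) for s in pos_sims)
    neg = sum(math.exp(s / tau) for s in neg_sims)
    return -math.log(pos / (pos + neg))

# One well-matched positive pair and two poor negatives give a small loss.
loss = mil_nce_loss([0.9], [-0.2, 0.1], tau=0.1)
print(loss)
```

Because only the sum over the positive bag must dominate, no individual pair is forced to match, which is what accommodates sign variants and weak subtitle alignment.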

Implementation details
In this section, we provide details for the learning framework, covering the embedding architecture, sampling protocol and optimization procedure.
Embedding architecture. The architecture comprises an I3D spatio-temporal trunk network [16] to which we attach an MLP consisting of three linear layers separated by leaky ReLU activations (with negative slope 0.2) and a skip connection. The trunk network takes as input 16 frames from a 224 × 224 resolution video clip and produces 1024-dimensional embeddings, which are then projected to 256-dimensional sign spotting embeddings by the MLP. More details about the embedding architecture can be found in Appendix C.1.
Joint pretraining. The I3D trunk parameters are initialised by pretraining for sign classification jointly over the sparse annotations M of a continuous signing dataset (BSL-1K [5]) and examples from a sign dictionary dataset (BslDict) which fall within their common vocabulary.
Since we find that dictionary videos of isolated signs tend to be performed more slowly, we uniformly sample 16 frames from each dictionary video, with a random shift and random frame rate, n times, where n is proportional to the length of the video; we pass these clips through the I3D trunk, then average the resulting vectors before they are processed by the MLP to produce the final dictionary embeddings. We find that this form of random sampling performs better than sampling 16 consecutive frames from the isolated signing videos (see Appendix C.1 for more details). During pretraining, minibatches of size 4 are used; and colour, scale and horizontal flip augmentations are applied to the input video, following the procedure described in [5]. The trunk parameters are then frozen and the MLP outputs are used as embeddings. Both datasets are described in detail in Sec. 4.1.
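The dictionary sampling scheme can be sketched as below. This is an assumption-laden illustration (the stride computation, the "one clip per 16 frames" rule for n, and the function names are ours, not the released code):

```python
import random

# Sketch of multi-clip sampling for slow dictionary videos: spread 16 frame
# indices over the whole video with a random shift (an effectively random
# frame rate), and take n such clips, with n growing with video length; the
# n pooled trunk embeddings are then averaged before the MLP.

def sample_clip_indices(num_frames, clip_len=16, rng=None):
    """Pick `clip_len` frame indices uniformly spread over the video,
    with a random shift, so the slow sign is covered end-to-end."""
    rng = rng or random
    stride = max(1.0, num_frames / clip_len)
    shift = rng.uniform(0, stride)
    return [min(num_frames - 1, int(shift + i * stride))
            for i in range(clip_len)]

def num_clips(num_frames, frames_per_clip=16):
    """Number n of sampled clips, proportional to video length
    (assumed rule: one clip per 16 frames, at least one clip)."""
    return max(1, num_frames // frames_per_clip)

idx = sample_clip_indices(56, rng=random.Random(0))
print(len(idx), num_clips(56))
```

Spreading indices over the full clip, rather than taking 16 consecutive frames, is what compensates for the slower rate of isolated signing.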
Minibatch sampling. To train the MLP given the pretrained I3D features, we sample data by iterating over the set of labelled segments comprising the sparse annotations, M, that accompany the dataset of continuous, subtitled signing, to form minibatches. For each continuous video, we sample 16 consecutive frames around the annotated timestamp (more precisely, a random offset within 20 frames before and 5 frames after, following the timing study in [5]). We randomly sample 10 additional 16-frame clips from this video outside of the labelled window, i.e., continuous background segments. For each subtitled sequence, we sample the dictionary entries for all subtitle words that appear in V_L (see Fig. 4 for a sample batch formation).
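The foreground/background clip sampling above can be sketched as follows (an illustrative sketch under assumed frame conventions, not the released code):

```python
import random

# Sketch of per-video clip sampling: one 16-frame foreground clip taken at
# a random offset within [20 frames before, 5 frames after] the annotated
# timestamp, plus 10 background clips that do not overlap the labelled
# window.

def sample_clips(annot_frame, num_frames, clip_len=16,
                 num_background=10, rng=None):
    rng = rng or random.Random(0)
    # Foreground: random offset around the annotation, clamped to the video.
    fg = max(0, min(num_frames - clip_len,
                    annot_frame + rng.randint(-20, 5)))
    # Background: rejection-sample clips outside the labelled window.
    bgs = []
    while len(bgs) < num_background:
        s = rng.randrange(num_frames - clip_len)
        if s + clip_len <= fg or s >= fg + clip_len:
            bgs.append(s)
    return fg, bgs

fg, bgs = sample_clips(annot_frame=100, num_frames=1000)
print(fg, len(bgs))
```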
Our minibatch comprises 128 sequences of continuous signing and their corresponding dictionary entries (we investigate the impact of batch size in Sec. 4.4). The embeddings are then trained by minimising the loss defined in Eqn. (1) in conjunction with positive bags, P_bags, and negative bags, N_bags, which are constructed on-the-fly for each minibatch (see Fig. 4).
Optimization. We use an SGD optimizer with an initial learning rate of 10^-2 to train the embedding architecture. The learning rate is decayed twice by a factor of 10 (at epochs 40 and 45). We train all models, including baselines and ablation studies, for 50 epochs, at which point we find that learning has always converged.
Test time. To perform spotting, we obtain the embeddings learned with the MLP. For the dictionary, we have a single embedding averaged over the video. Continuous video embeddings are obtained with a sliding window (stride 1) over the entire sequence. We show the importance of using such a dense stride for precise localisation in our ablations (Sec. 4.4). However, for simplicity, all qualitative visualisations are performed with continuous video embeddings obtained with a sliding window of stride 8.
We calculate the cosine similarity score between the continuous signing sequence embeddings and the embedding for a given dictionary video. We determine the location with the maximum similarity as the location of the queried sign. We maintain embedding sets of all variants of dictionary videos for a given word and choose the best match as the one with the highest similarity.
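The test-time procedure amounts to a dense similarity search, which can be sketched as follows (helper names and the toy 2-d embeddings are ours; in practice the embeddings are the 256-dimensional MLP outputs):

```python
import math

# Sketch of test-time spotting: compare every sliding-window embedding of
# the continuous video against each dictionary variant of the query word
# with cosine similarity, and return the window with the best match over
# all variants.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def spot_sign(window_embeds, variant_embeds):
    """Return (best_window_index, best_similarity) over all sign variants."""
    best = (-1, -2.0)
    for t, w in enumerate(window_embeds):
        score = max(cosine(w, d) for d in variant_embeds)
        if score > best[1]:
            best = (t, score)
    return best

windows = [[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]]
variants = [[0.0, 1.0], [1.0, 1.0]]
print(spot_sign(windows, variants))  # window 2 matches the first variant
```

Taking the maximum over variants is what lets the spotter identify which regional variant of the sign was used, as explored in Sec. 4.5.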

Experiments
In this section, we first present the datasets used in this work (including the contributed BslDict dataset) in Sec. 4.1, followed by the evaluation protocol in Sec. 4.2. We then illustrate the benefits of the Watch, Read and Lookup learning framework for sign spotting against several baselines (Sec. 4.3) with a comprehensive ablation study that validates our design choices (Sec. 4.4). Next, we investigate three applications of our method in Sec. 4.5, showing that it can be used to (i) not only spot signs, but also identify the specific sign variant that was used, (ii) label sign instances in continuous signing footage given the associated subtitles, and (iii) discover "faux amis" between different sign languages. We then provide experiments on sign language recognition, significantly improving the state of the art by applying our labelling technique to obtain more training examples automatically (Sec. 4.6 and Sec. 4.7). Finally, we discuss limitations of our sign spotting technique using dictionaries (Sec. 4.8).

Datasets
Although our method is conceptually applicable to a number of sign languages, in this work we focus primarily on BSL, the sign language of British deaf communities. We use BSL-1K [5], a large-scale, subtitled and sparsely annotated dataset of more than 1,000 hours of continuous signing which offers an ideal setting in which to evaluate the effectiveness of the Watch, Read and Lookup sign spotting framework. To provide dictionary data for the lookup component of our approach, we also contribute BslDict, a diverse visual dictionary of signs. These two datasets are summarised in Tab. 1 and described in more detail below. We further include experiments on a new dataset, BOBSL [4], which we describe in Sec. 4.7 together with results. The BOBSL dataset has similar properties to BSL-1K. BSL-1K [5] comprises over 1,000 hours of video of continuous sign-language-interpreted television broadcasts, with accompanying subtitles of the audio content. In [5], this data is processed for the task of individual sign recognition: a visual keyword spotter is applied to signer mouthings giving a total of 273K sparsely localised sign annotations from a vocabulary of 1,064 signs (169K in the training partition as shown in Tab. 1). Please refer to Sec. 3.1 and [5] for more details on the automatic annotation pipeline. We refer to Sec. 4.6 for a description of the BSL-1K sign recognition benchmark (Test Rec 2K and Test Rec 37K in Tab. 1). In this work, we process this data for the task of retrieval, extracting long videos with associated subtitles. In particular, we pad ±2 seconds around the subtitle timestamps and we add the corresponding video to our training set if there is a sparse annotation from mouthing falling within this time window -we assume this constraint indicates that the signing is reasonably well-aligned with its subtitles. We further consider only the videos whose subtitle duration is longer than 2 seconds. 
For testing, we use the automatic test set (corresponding to mouthing locations with confidences above 0.9). We thus obtain 78K training (Train ReT) and 2K test (Test ReT) videos, as shown in Tab. 1, each of which has a subtitle of 8 words on average and 1 sparse mouthing annotation.
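The training-set construction above (pad ±2 seconds, require a mouthing annotation inside the window, discard short subtitles) can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the dictionary keys and the helper name are assumptions.

```python
from typing import List, Dict

def select_training_clips(subtitles: List[Dict], mouthing_anns: List[Dict],
                          pad: float = 2.0, min_dur: float = 2.0) -> List[Dict]:
    """Sketch of the retrieval training-set construction: each subtitle dict
    has 'start'/'end' times (seconds) and 'text'; each mouthing annotation
    has a 'time' (seconds). Field names are illustrative."""
    clips = []
    for sub in subtitles:
        if sub["end"] - sub["start"] <= min_dur:
            continue  # keep only subtitles longer than 2 seconds
        # pad +-2 seconds around the subtitle timestamps
        win_start, win_end = sub["start"] - pad, sub["end"] + pad
        # keep the clip only if a sparse mouthing annotation falls in the window
        anchors = [a for a in mouthing_anns if win_start <= a["time"] <= win_end]
        if anchors:
            clips.append({"start": win_start, "end": win_end,
                          "subtitle": sub["text"], "anchor": anchors[0]})
    return clips
```

In this sketch a clip is kept only when both conditions hold, mirroring the alignment assumption described above.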
BslDict. BSL dictionary videos are collected from a BSL sign aggregation platform, signbsl.com [1], giving us a total of 14,210 video clips for a vocabulary of 9,283 signs. Each sign is typically performed several times by different signers, often in different ways. The dictionary videos are linked from 28 known website sources and each source has at least 1 signer. We used face embeddings (computed with SENet-50 trained on VGGFace2 [14]) to cluster signer identities and manually verified that there are a total of 124 different signers.

Table 1: Datasets: We provide (i) the number of individual signing videos, (ii) the vocabulary size of the annotated signs, and (iii) the number of signers for several subsets of BSL-1K and BslDict. BSL-1K is large in the number of annotated signs whereas BslDict is large in the vocabulary size. Note that BSL-1K is constructed differently depending on whether it is used for the task of recognition or retrieval: for retrieval, longer signing sequences are used around individual localised signs, as described in Sec. 4.1.
The dictionary videos contain isolated signs (as opposed to the co-articulated signs in BSL-1K): this means (i) the start and end of the video clips usually consist of a still signer pausing, and (ii) the sign is performed at a much slower rate for clarity. We first trim the sign dictionary videos, using body keypoints estimated with OpenPose [15] to indicate the start and end of wrist motion, discarding frames where the signer is still. With this process, the average number of frames per video drops from 78 to 56 (still significantly longer than co-articulated signs). To the best of our knowledge, BslDict is the first curated BSL sign dictionary dataset for computer vision research. A collection of metadata associated with the BslDict dataset is made publicly available, as well as our pre-computed video embeddings from this work.
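The trimming step can be illustrated with a small sketch: given per-frame wrist coordinates (e.g. from OpenPose), keep only the span where the wrist actually moves. The threshold value and function name are assumptions, not the authors' exact procedure.

```python
import numpy as np

def trim_still_frames(wrist_xy: np.ndarray, motion_thresh: float = 2.0):
    """Illustrative sketch: wrist_xy has shape (T, 2), one (x, y) wrist
    position per frame. Returns the first and last frame indices at which
    the per-frame wrist displacement exceeds motion_thresh, so that still
    frames at the start/end of a dictionary clip can be discarded."""
    # displacement magnitude between consecutive frames, shape (T-1,)
    disp = np.linalg.norm(np.diff(wrist_xy, axis=0), axis=1)
    moving = np.where(disp > motion_thresh)[0]
    if len(moving) == 0:
        return 0, len(wrist_xy) - 1  # no motion detected: keep everything
    # diff index i measures motion between frames i and i+1
    return int(moving[0]), int(moving[-1] + 1)
```

A production version would also smooth the keypoint trajectories and handle missing detections, which this sketch omits.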
For the experiments in which BslDict is filtered to the 1,064 vocabulary of BSL-1K, we have 3K videos as shown in Tab. 1. Within this subset, each sign has between 1 and 10 examples (average of 3).

Evaluation protocols
Protocols. We define two settings: (i) training with the entire 1,064-sign vocabulary of annotations in BSL-1K; and (ii) training on a subset of 800 signs. The latter is needed to assess performance on novel signs, for which we do not have access to co-articulated labels at training. We thus use the remaining 264 words for testing. This test set is therefore common to both training settings; it is either 'seen' or 'unseen' depending on the setting. However, we do not limit the vocabulary of the dictionary, as a practical assumption for which we show benefits.
Metrics. The performance is evaluated with ranking metrics, as in retrieval. For every sign $s_i$ in the test vocabulary, we first select the BSL-1K test set clips which have a mouthing annotation of $s_i$ and then record the percentage of times that a dictionary clip of $s_i$ appears in the first 5 retrieved results: this is the 'Recall at 5' (R@5). This metric is motivated by the fact that different English words can correspond to the same sign, and vice versa. We also report mean average precision (mAP). For each video pair, the match is considered correct if (i) the dictionary clip corresponds to $s_i$ and the BSL-1K video clip has a mouthing annotation of $s_i$, and (ii) the predicted location of the sign in the BSL-1K video clip, i.e., the time frame where the maximum similarity occurs, lies within a window around the ground truth mouthing time. In particular, we define the correct interval as extending from 20 frames before to 5 frames after the labelled time (based on the study in [5]). Finally, because the BSL-1K test set is class-unbalanced, we report performances averaged over the test classes.
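The two evaluation criteria above can be made concrete with a short sketch (helper names are ours, not from the released code): a localisation is correct inside a [-20, +5] frame window around the labelled mouthing time, and R@5 checks the top-5 retrieved results.

```python
def is_correct_localisation(pred_frame: int, gt_mouthing_frame: int,
                            before: int = 20, after: int = 5) -> bool:
    """A predicted sign location (frame of maximum similarity) counts as
    correct when it lies within [-20, +5] frames of the labelled mouthing
    time, following the evaluation window described above."""
    return gt_mouthing_frame - before <= pred_frame <= gt_mouthing_frame + after

def recall_at_k(ranked_labels, gt_label, k: int = 5) -> bool:
    """R@k for one query: does a dictionary clip of the queried sign appear
    among the first k retrieved results?"""
    return gt_label in ranked_labels[:k]
```

The reported 'Recall at 5' is then the percentage of test queries for which `recall_at_k` returns true, averaged over test classes.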

Comparison to baselines
In this section, we evaluate different components of our approach. We first compare our contrastive learning approach with classification baselines. Then, we investigate the effect of our multiple-instance loss formulation. Finally, we report performance on a sign spotting benchmark.

I3D baselines. We start by evaluating baseline I3D models trained with classification on the task of spotting, using the embeddings before the classification layer. We have three variants in Tab. 2: (i) I3D BSL-1K provided by [5], which is trained only on the BSL-1K dataset; and we also train (ii) I3D BslDict and (iii) I3D BSL-1K,BslDict. Training only on BslDict (I3D BslDict) performs significantly worse due to the few examples available per class and the domain gap that must be bridged to spot co-articulated signs, suggesting that dictionary samples alone do not suffice to solve the task. We observe improvements when fine-tuning I3D BSL-1K jointly on the two datasets (I3D BSL-1K,BslDict), which becomes our base feature extractor for the remaining experiments, in which we train a shallow MLP on top.

Table 2: The effect of the loss formulation: Embeddings learned with the classification loss are suboptimal since they are not trained for matching the two domains. Contrastive-based loss formulations (NCE) improve significantly, particularly when we adopt the multiple-instance variant introduced as our Watch-Read-Lookup framework of multiple supervisory signals. We train the relatively cheaper MLP-based models with three random seeds for each model and report the mean and the standard deviation.
Loss formulation. We first train the MLP parameters on top of the frozen I3D trunk with classification to establish a baseline in a comparable setup. Note that this shallow architecture can be trained with larger batches than I3D. Next, we investigate variants of our loss to learn a joint sign embedding between the BSL-1K and BslDict video domains: (i) the standard single-instance InfoNCE [46, 63] loss, which pairs each BSL-1K video clip with one positive BslDict clip of the same sign; (ii) Watch-Lookup, which considers multiple positive dictionary candidates, but does not consider subtitles (and is therefore limited to the annotated video clips). Tab. 2 summarises the results. Our Watch-Read-Lookup formulation, which effectively combines multiple sources of supervision in a multiple-instance framework, outperforms the other baselines in both seen and unseen protocols.
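The single-instance baseline (i) can be sketched as a standard InfoNCE loss in PyTorch. This is a minimal illustration of the baseline, not the authors' released loss; the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def info_nce(cont_emb: torch.Tensor, dict_emb: torch.Tensor,
             tau: float = 0.07) -> torch.Tensor:
    """Minimal single-instance InfoNCE sketch: row i of cont_emb (a
    continuous signing clip embedding) is paired with row i of dict_emb
    (a dictionary clip of the same sign); all other rows in the batch act
    as negatives. tau is the temperature hyperparameter."""
    cont = F.normalize(cont_emb, dim=1)
    dic = F.normalize(dict_emb, dim=1)
    logits = cont @ dic.t() / tau            # (B, B) scaled cosine similarities
    targets = torch.arange(cont.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, targets)
```

The multiple-instance variants (Watch-Lookup and Watch-Read-Lookup) generalise this by scoring bags of candidate positive pairs rather than a single diagonal pairing.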
Extending the vocabulary. The results presented so far used the same vocabulary for both the continuous and dictionary datasets. In reality, one can assume access to the entire vocabulary in the dictionary, but obtaining annotations for the continuous videos is prohibitive. Tab. 3 investigates removing the vocabulary limit on the dictionary side, while keeping the continuous annotation vocabulary at 800 signs. We show that using the full 9K vocabulary from BslDict improves the results in the unseen setting.

BSL-1K sign spotting benchmark. Although our learning framework primarily targets good performance on unseen continuous signs, it can also be naively applied to the (closed-vocabulary) sign spotting benchmark proposed by [5]. This benchmark requires a model to localise every instance belonging to a given set of sign classes (334 in total) within long sequences of untrimmed footage. The benchmark is challenging because each sign appears infrequently (corresponding to approximately one positive instance in every 90 minutes of continuous signing). Our Watch-Read-Lookup model achieves a score of 0.170 mAP, outperforming the previous state-of-the-art performance of 0.160 mAP [5].

Ablation study
We provide ablations for the learning hyperparameters, such as the batch size and the temperature, and for the mouthing confidence threshold used to select the training data. Note that the effective size of the batch with our sampling is larger due to sampling extra video clips corresponding to subtitles.

Temperature. We analyse the impact of the temperature hyperparameter τ on the performance of Watch-Lookup in Fig. 5.

Mouthing confidence threshold at training. As explained in Sec. 3.1, the sparse annotations of the BSL-1K dataset are obtained automatically by running a visual keyword spotting method based on mouthing cues. The dataset provides a confidence value associated with each label, ranging between 0.5 and 1.0. Similar to [5], we experiment with different thresholds to determine the training set; lower thresholds result in a noisier but larger training set. From Tab. 4, we conclude that a mouthing confidence threshold of 0.5 performs best, in accordance with the conclusion of [5].
Effect of the sliding window stride. We investigate the effect of the stride parameter. Our window size is 16 frames, i.e., the number of input frames for the I3D feature extractor. A standard approach when extracting features from longer videos is to use a sliding window with 50% overlap (i.e., a stride of 8 frames). However, this reduces the temporal resolution of the search space by a factor of 8, and a stride of 8 may skip the most discriminative moment, since a sign typically lasts 7-13 frames (but can be shorter) [48] in continuous signing video. In Tab. 5, we see that we gain a significant localisation improvement by computing the similarities more densely; e.g., a stride of 4 frames may be sufficiently dense. In our experiments, we use a stride of 1. We refer to Appendix B for additional ablations.
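The dense-similarity search above can be sketched as follows: given embeddings of 16-frame windows extracted at stride 1, the predicted sign location is the window with maximum cosine similarity to the dictionary embedding. Function and argument names here are illustrative.

```python
import numpy as np

def sliding_similarities(cont_feats: np.ndarray, dict_feat: np.ndarray):
    """Sketch of dense sign localisation: cont_feats holds L2-normalised
    embeddings of 16-frame windows extracted at stride 1, shape (N, d);
    dict_feat is an L2-normalised dictionary embedding, shape (d,).
    Returns per-window cosine similarities and the index of the best
    matching window."""
    sims = cont_feats @ dict_feat       # cosine similarity per window
    return sims, int(np.argmax(sims))
```

With a stride of 8 instead, `cont_feats` would contain one eighth as many rows and the argmax could miss a short sign entirely, which is the motivation for the denser stride.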

Applications
In this section, we investigate three applications of our sign spotting method.

Sign variant identification. We show the ability of our model to spot which specific variant of a sign was used. In Fig. 6, we observe high similarity scores when the variant of the sign matches in both the BSL-1K and BslDict videos. Identifying such sign variations allows a better understanding of regional differences and can potentially help standardisation efforts of BSL.

Fig. 6: Sign variant identification: We plot the similarity scores between BSL-1K test clips and BslDict variants of the signs "animal" (left; subtitle: "One of Britain's worst cases of animal cruelty.") and "before" (right; subtitle: "I've never known you talk like this before, Johnnie. It's mad!") over time. A high similarity occurs for the first two rows, where the BslDict examples match the variant used in BSL-1K. The labelled mouthing times from [5] are shown by red vertical lines and approximate windows for signing times are shaded. Note that neither the mouthing annotations (ground truth) nor the dictionary spottings provide the duration of the sign, but only a point in time where the response is highest. The mouthing peak (red vertical line) tends to appear at the end of the sign (due to the use of an LSTM in the visual keyword spotter), whereas the dictionary peak (blue curve) tends to appear in the middle of the sign.

Fig. 7: We visually inspect the quality of the dictionary spottings, showing cases in which multiple words per subtitle are spotted. The predicted locations of the signs correspond to the peak similarity scores. Note that, unlike in Fig. 6, we cannot overlay the ground truth, since the annotations from mouthing cues are not dense enough to provide ground-truth sign locations for 3 words per subtitle.

Dense annotations. We demonstrate the potential of our model to obtain dense annotations on continuous sign language video data.
Sign spotting through the use of sign dictionaries is not limited to mouthings as in [5], and is therefore of great importance for scaling up datasets to learn more robust sign language models. In Fig. 7, we show qualitative examples of localising multiple signs in a given sentence in BSL-1K, where we only query the words that occur in the subtitles, reducing the search space. In fact, if we assume the word to be known, we obtain 83.08% sign localisation accuracy on BSL-1K with our best model. This is defined as the fraction of times the maximum similarity occurs within -20/+5 frames of the end label time provided by [5].

"Faux amis". There are works manually investigating lexical similarities between sign languages [6,52]. We show qualitatively the potential of our model to discover similarities, as well as "faux amis", between different sign languages, in particular between British (BSL) and American (ASL) Sign Languages. We retrieve nearest neighbours according to visual embedding similarities between BslDict, which has a 9K vocabulary, and WLASL [39], an ASL isolated sign language dataset with a 2K vocabulary. We provide examples in Fig. 8. We automatically identify several signs with similar manual features, some of which correspond to different meanings in English (left), as well as the same meanings, such as "ball", "stand", "umbrella" (right).

Sign language recognition
As demonstrated qualitatively in Sec. 4.5, we can reliably obtain automatic annotations using our sign spotting technique when the search space is reduced to candidate words in the subtitle. A natural way to exploit our method is to apply it on the BSL-1K training set in conjunction with the weakly-aligned subtitles to collect new localised sign instances. This allows us to train a sign recognition model: in this case, to retrain the I3D architecture from [5] which was previously supervised only with signs localised through mouthings.
BSL-1K automatic annotation. Similar to our previous work using mouthing cues [5], where words in the subtitle were queried within a neighbourhood around the subtitle timestamps, we query each subtitle word if it falls within a predefined vocabulary. In particular, we query words and phrases from the 9K BslDict vocabulary if they occur in the subtitles. To determine whether a query from the dictionary occurs in the subtitle, we apply several checks. We look for the original word or phrase as it appears in the dictionary, as well as its text-normalised form (e.g., "20" becomes "twenty"). For the subtitle, we look for its original, text-normalised, and lemmatised forms. Once we find a match between any form of the dictionary text and any form of the subtitle text, we query the dictionary video feature within the search window in the continuous video features. We use search windows of ±4 seconds padding around the subtitle timestamps. We compute the similarity between the continuous signing search window and each of the dictionary variants for a given word: we record the frame location of maximum similarity for all variants and choose the best match as the one with the highest similarity score. The final sign localisations are obtained by filtering the peak similarity scores to those above a 0.7 threshold (resulting in a vocabulary of 4K signs) and taking 32 frames centered around the peak location. Fig. 9 summarises several statistics computed over the training set. We note that sign spotting with dictionaries (D) is more effective than with mouthing (M) in terms of the yield (510K versus 169K localised signs). Since D can include duplicates from M, we further report the number of instances for which a mouthing spotting of the same keyword query exists within the same search window. We find that the majority of our D spottings represent new, not previously localised instances (see Fig. 9, right).

Fig. 9: Statistics on the yield from the automatic annotations: We plot the vocabulary size (left) and the number of localised sign instances for the 1K query vocabulary (middle) and the 9K query vocabulary (right) over several similarity thresholds for the new automatic annotations in the training set that we obtain through dictionaries. While we obtain a large number of localised signs (783K at the 0.7 threshold) for the full 9K vocabulary, in our recognition experiments we use a subset of 510K annotations that correspond to the 1K vocabulary. To approximately quantify the amount of annotations that represent duplicates of those found through mouthing cues, we count the localisations for which the same keyword exists among the mouthing annotations within the same search window. We observe that the majority of the annotations are new (783K vs 122K).
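The text-matching step described above (comparing original, text-normalised, and lemmatised forms) can be sketched as follows. This is a toy illustration: the default `normalise`/`lemmatise` functions are stand-ins for the real number-to-words normaliser and NLP lemmatiser, which are not specified here.

```python
def subtitle_matches(query: str, subtitle_tokens: list,
                     normalise=lambda w: w.lower(),
                     lemmatise=lambda w: w.rstrip('s')) -> bool:
    """Sketch of dictionary-to-subtitle matching: a dictionary entry is
    queried when any of its forms (original or text-normalised) matches
    any form (original, normalised, or lemmatised) of a subtitle token.
    The default normalise/lemmatise callables are toy placeholders."""
    query_forms = {query, normalise(query)}
    for tok in subtitle_tokens:
        tok_forms = {tok, normalise(tok), lemmatise(normalise(tok))}
        if tok_forms & query_forms:
            return True
    return False
```

In the full pipeline, a successful match triggers a similarity search for the dictionary video within the ±4-second window around the subtitle.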
BSL-1K sign recognition benchmark. We use the BSL-1K manually verified recognition test set with 2K samples [5], which we denote with Test Rec 2K , and significantly extend it to 37K samples as Test Rec 37K . We do this by (a) running our dictionary-based sign spotting technique on the BSL-1K test set and (b) verifying the predicted sign instances with human annotators using the VIA tool [25] as in [5]. Our goal in keeping these two divisions is three-fold: (i) Test Rec 2K is the result of annotating "mouthing" spottings above 0.9 confidence, which means the models can largely rely on mouthing cues to recognise the signs. The new Test Rec 37K annotations have both "mouthing" (10K) and "dictionary" (27K) spottings. The dictionary annotations are the result of annotating dictionary spottings above 0.7 confidence from this work; therefore, models are required to recognise the signs even in the absence of mouthing, reducing the bias towards signs with easily spotted mouthing cues. (ii) Test Rec 37K spans a much larger fraction of the training vocabulary as seen in Tab. 1, with 950 out of 1,064 sign classes (vs only 334 classes in the original benchmark Test Rec 2K of [5]). (iii) We wish to maintain direct comparison to our previous work [5]; therefore, we report on both sets in this work.
Comparison to prior work. In Tab. 6, we compare three I3D models trained on mouthing annotations (M), dictionary annotations (D), and their combination (M+D). First, the model trained only on dictionary annotations performs worse on Test Rec 2K (vs 76.6%). This may be due to the strong bias towards mouthing cues in the small test set Test Rec 2K. Second, the benefit of combining annotations from both sources can be seen in the sign classifier trained using 678K automatic annotations: it obtains state-of-the-art performance on Test Rec 2K, as well as on the more challenging test set Test Rec 37K. All three models in the table (M, D, M+D) are pretrained on Kinetics [16], followed by video pose distillation as described in [5]. We observed no improvements when initialising M+D training from M-only pretraining.

Table 6: An improved I3D sign recognition model: We find signs via automatic dictionary spotting (D), significantly expanding the training and testing data obtained from mouthing cues by [5] (M). We also significantly expand the test set by manually verifying these new automatic annotations from the test partition (Test Rec 2K vs Test Rec 37K). By training on the extended M+D data, we obtain state-of-the-art results, outperforming the previous work of [5]. §The slight improvement in the performance of [5] over the original results reported in that work is due to our denser test-time averaging when applying sliding windows (8-frame vs 1-frame stride).

Table 7: Recognition ablations: We train on portions of the automatic annotations obtained through dictionaries, filtering the localisations by similarity threshold: 0.7, 0.8, 0.9. We find that a lower threshold results in a larger training set and increased performance. On the other hand, restricting the annotations to only those which fall within the subtitle timestamps, without temporal padding for the search window, reduces the accuracy.
Our results can be interpreted as bootstrapping from an initial model, which has access to a large audio-visual training set with mouthing annotations. The M recognition model has distilled this information while incorporating manual patterns. The Watch-Read-Lookup framework has mainly relied on these mouthing locations to learn matching with dictionary samples. The D recognition model is the result of this series of annotation expansions. The final recognition model therefore exploits multiple sources of supervision. We refer to our recent work [58] for a complementary way of expanding the automatic annotations; there, we introduce an attention-based sign localisation where the localisation ability emerges from a sequence prediction task.

Sign recognition ablations. In Tab. 7, we provide further ablations for training the recognition models based on automatic dictionary spotting annotations. In particular, we investigate (i) the similarity threshold, which determines the amount of training data as well as the noise, and (ii) no padding versus ±4-second padding of the subtitle locations when defining the search window. We see in Tab. 7 that filtering the sign annotations with a high threshold such as 0.9, denoted D 0.9, drastically reduces the training size (from 510K to 36K), which in turn results in poor recognition performance. The accuracy with D 0.7 is slightly above that of D 0.8. Moreover, both the performance and the training size decrease if we restrict the sign annotations to those which fall within the subtitle timestamps, i.e., no temporal padding in the search window when applying sign spotting. We retain a similarity threshold of 0.7 and a ±4-second padding for our final model.

Experiments on BOBSL. We repeated our sign spotting techniques on the BOBSL data using mouthing and dictionary cues in combination with subtitles. Keyword spotting with mouthing follows our previous work [5] and obtains 502K sign localisations over 0.5 confidence (M 0.5).
Sign spotting with dictionaries is similar to the procedure described in Sec. 4.6, resulting in 727K sign localisations over 0.75 similarity (D 0.75 ).
In Tab. 8, we present sign recognition results using these automatic annotations for classification training over a vocabulary of 2,281 categories. The BOBSL test set contains 25,045 manually verified signs obtained through both types of spotting techniques. We experiment with various sets of annotations for training and observe that mouthing (M) and dictionary (D) spottings are complementary. Similar to Tab. 7, we find that lowering the similarity threshold improves the performance for D-only training. However, when the annotations are combined into a significantly larger training set (i.e., a total of 1.2 million clips with low thresholds), this improvement disappears (75.8% top-1 accuracy for both). We also plot the per-word verification accuracies on BOBSL for the top- and bottom-40 words sorted by accuracy. Each bar also shows the total number of manual verifications performed for the corresponding word. While many words have close to 100% accuracy, certain words fail drastically, such as 'dvd' and 'dna', mainly due to being fingerspelled signs. Other words, such as 'even' and 'able', may have different meanings depending on context. Note that the query word here is obtained from the subtitles.

Limitations
In this section, we investigate failure modes of our sign spotting mechanism, in particular by using the data obtained through manual verifications. More specifically, we compute statistics over 10K annotations on the BOBSL test set that were obtained via dictionary spotting through subtitles. Of these, 76% were marked as correct. In Fig. 10, we examine the per-word accuracy to check whether certain signs fail more than others. We note two main failure modes: (i) fingerspelled words (e.g., 'dvd', 'dna') are difficult for the model, perhaps due to sparse frame sampling from long dictionary videos; (ii) common words such as 'even' and 'able' may have context-dependent meanings, and querying these words due to their occurrence in the subtitles leads to false positives.

Fig. 11: Qualitative analysis: We visualise sample spotting results from the manually verified set of BOBSL that obtain the highest similarity scores (overlaid on the spotted frame). Note that the query word is obtained from the subtitles. Top and bottom blocks represent success and failure cases, respectively. We notice weak hand shape and motion similarities in the failing examples. The figure only shows the middle frame of each video; we therefore provide video visualisations at https://www.robots.ox.ac.uk/~vgg/research/bsldict/viz/viz_bobsl_dicts.html.
In Fig. 11, we further visualise samples from this manually verified set of spottings. We focus on cases where high similarities occur and group the examples into success (top) and failure (bottom) cases. Within failures, we observe weak hand shape and motion similarities. As previously mentioned, this might be due to querying a word for which a sign correspondence does not exist within the temporal search window. Future work may develop a mechanism to determine which words to query from the subtitle to ensure correspondence with a sign, so that the problem only becomes localisation.

Conclusions
We have presented an approach to spot signs in continuous sign language videos using visual sign dictionary videos, and have shown the benefits of leveraging multiple supervisory signals available in a realistic setting: (i) sparse annotations in continuous signing (in our case, from mouthings), (ii) accompanied with subtitles, and (iii) a few dictionary samples per word from a large vocabulary. We employ multiple-instance contrastive learning to incorporate these signals into a unified framework. We finally propose several potential applications of sign spotting and demonstrate its ability to scale up sign language datasets for training strong sign language recognition models.

APPENDIX
This appendix provides additional qualitative (Sec. A) and experimental results (Sec. B), as well as detailed explanations of the training of our Watch-Read-Lookup framework (Sec. C).

A Qualitative Results
Please watch the video on our project webpage to see qualitative results of our model in action. We illustrate the sign spotting task, as well as the specific applications considered in the main paper: sign variant identification, densification of annotations, and "faux amis" identification between languages.

B Additional Experiments
In this section, we present complementary experimental results to the main paper. We report the effect of class-balancing (Sec. B.1), domain-specific layers (Sec. B.2), language-aware negative sampling (Sec. B.3), and the trunk network architecture (Sec. B.4).

B.1 Class-balanced sampling

One option is to sample one labelled example per sign class when sampling continuous sequences, i.e., class-balancing the minibatches. Thus, all but one of the labelled samples in the batch can be used as negatives for a given dictionary bag corresponding to a labelled sample. Note that this approach limits the batch size to be less than or equal to the number of sign classes. Tab. A.1 experiments with the sampling strategy. We observe that the performance is not significantly different with/without class-balanced sampling for various batch sizes.
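The class-balanced sampling variant can be sketched as a simple batch sampler (an illustration, not the released training code): at most one labelled clip is drawn per sign class, so every other labelled sample in the batch is a valid negative for a given dictionary bag.

```python
import random

def class_balanced_batch(label_to_indices: dict, batch_size: int) -> list:
    """Sketch of class-balanced minibatch sampling: label_to_indices maps
    each sign class to the indices of its labelled continuous clips. We
    draw distinct classes first, then one clip per class, so the batch
    size is capped by the number of classes."""
    classes = random.sample(list(label_to_indices),
                            k=min(batch_size, len(label_to_indices)))
    return [random.choice(label_to_indices[c]) for c in classes]
```

Without this constraint, two clips of the same sign could appear in one batch and would be incorrectly treated as negatives of each other.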

B.2 Domain-specific layers
As noted in the main paper, the videos from the continuous signing and from the dictionaries differ significantly, e.g., continuous signing data is faster than the dictionary signing, and is co-articulated whereas the dictionary has isolated signs. Given such a domain gap, we explore whether it is beneficial to learn domainspecific MLP layers: one for the continuous, and one for the dictionary. Tab. A.2 presents a comparison between domain-specific layers versus shared parameters. We do not observe any gains from such separation. Therefore, we keep a single MLP for both domains for simplicity.

B.3 Language-aware negative sampling
Working with a large vocabulary of words brings the additional challenge of handling synonyms. We consider two types of similarities. First, two different categories in the BslDict sign dictionary may belong to the same sign category if the corresponding English words are synonyms. Second, the metadata we have collected with the BslDict dataset provides similarity labels between sign categories, which may be used to group certain signs. In this work, we have largely ignored this issue by associating each sign to a single word. This results in constructing negative pairs for two identical signs, such as 'happy' and 'content'. Here, we explore whether it is beneficial to discard such pairs during training, instead of marking them as negatives.

B.4 Trunk network architecture

Tab. A.4: We compare I3D [16] with the S3D [64] architecture for the task of sign language recognition, in a comparable setup to [5]. We use the last 20 frames before the mouthing annotations with confidence above 0.5. We do not obtain gains with the S3D architecture; therefore, we use I3D in all the experiments to compute video features.

As shown in Tab. A.4, we compare two popular architectures for computing video representations. We have used I3D [16] in all our experiments. Here, we also train a 1064-way classification with the S3D architecture [64] on BSL-1K, as in [5], for sign language recognition. We do not observe improvements with S3D (in practice, we found that it overfit the training set to a greater degree); therefore, we use an I3D trunk. Note that the hyperparameters (e.g., learning rate) are tuned for I3D and kept the same for S3D.

C Training Details
In this section, we cover architectural details (Sec. C.1), a detailed formulation of our positive/negative bag sampling strategy (Sec. C.2), and a brief description of the infrastructure used to perform the experiments in the main paper (Sec. C.3).

Fig. A.1: We detail the layers of our embedding architecture. We freeze the I3D trunk and use it as a feature extractor; we only train the MLP layers with our loss formulation in the proposed framework. The same layers (and parameters) are used both for the dictionary video inputs and the continuous signing video inputs.

C.1 Architectural details
As explained in the main paper, our sign embeddings correspond to the output of a two-stage architecture: (i) an I3D trunk, and (ii) a three-layer MLP. We first train the I3D on both labelled continuous video clips and the dictionary videos jointly. We then freeze the I3D trunk and use it as a feature extractor. We only train the MLP layers with our loss formulation in the Watch-Read-Lookup framework.
I3D trunk. We first train the I3D parameters only with the BSL-1K annotated clips that have mouthing confidences above 0.5. For the 1,064-class training, we use the publicly available model from [5]; for the 800-class training, we perform our own training, again first pretraining with pose distillation. We then re-initialise the batch normalisation layers (as noted in Sec. 2 of the main paper) and finetune the model jointly on BSL-1K annotated clips (the ones with mouthing confidence above 0.8) and BslDict samples. The sampling frequencies for the two data sources are balanced. In the I3D classification pretraining phase, we treat each dictionary video independently with its corresponding label. We observe that the 1064-way classification performance on the training dictionary videos remains at 48.09% per-instance top-1 accuracy without the batch normalisation re-initialisation, as opposed to 78.94% with it. We also experimented with domain-specific batch normalisation layers [18], but the training accuracy for the dictionary videos was still low (62.73%).
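The batch-norm re-initialisation step can be sketched in PyTorch as follows (a minimal sketch; the actual I3D implementation may organise its BN layers differently): reset the running statistics and affine parameters of every batch normalisation layer before finetuning jointly on the two domains.

```python
import torch.nn as nn

def reinit_batchnorm(model: nn.Module) -> None:
    """Reset the running statistics and the learnable affine parameters of
    all batch normalisation layers in the model, so that the statistics are
    re-estimated during joint finetuning on both video domains."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()   # running mean -> 0, running var -> 1
            m.reset_parameters()      # affine weight -> 1, bias -> 0
```

This matches the large accuracy gap reported above (48.09% vs 78.94% on training dictionary videos), where stale single-domain BN statistics hurt the dictionary branch.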
As detailed in Sec. 3.3 of the main paper, we subsample the dictionary videos to roughly match their speed to that of the continuous signing videos. This subsampling includes a random shift and a random fps. We observe a decrease of 6.68% in the training dictionary classification accuracy if we instead sample 16 consecutive frames at the original temporal resolution, which is not sufficient to capture the full extent of a sign, because a dictionary video is 56 frames long on average.

Fig. A.1 illustrates the layers considered for our MLP architecture. It consists of 3 fully connected layers with LeakyReLU activations between them. The first linear layer also has a residual connection on the 1024-dimensional input features. We then reduce the dimensionality gradually to 512 and 256 for efficient training and testing.
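The MLP head described above can be sketched in PyTorch. Details not stated in the text (e.g. the LeakyReLU negative slope, the exact placement of the residual addition relative to the activation) are assumptions.

```python
import torch
import torch.nn as nn

class EmbeddingMLP(nn.Module):
    """Sketch of the 3-layer embedding MLP: a 1024->1024 linear layer with
    a residual connection on the input features, then LeakyReLU activations
    with a gradual reduction to 512 and 256 dimensions."""
    def __init__(self, in_dim: int = 1024):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, in_dim)   # first layer, same width
        self.fc2 = nn.Linear(in_dim, 512)
        self.fc3 = nn.Linear(512, 256)
        self.act = nn.LeakyReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(x + self.fc1(x))  # residual on the 1024-d input features
        x = self.act(self.fc2(x))
        return self.fc3(x)             # 256-d sign embedding
```

The same MLP (with shared parameters) processes both the dictionary and the continuous signing features, as noted in Sec. B.2.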

C.2 Positive/Negative bag sampling formulations
In the main paper, we described two approaches for sampling positive/negative MIL bags in Sec. 3.2. Due to space constraints, the sampling mechanisms were described at a high-level. Here, we provide more precise definitions of each bag. In addition to the set notation below, we include in the code release, the loss implementation as a PyTorch [47] function in loss/loss.py, together with a sample input (loss/sample inputs.pkl) comprising embedding outputs from the MLP for continuous and dictionary videos.
As noted in the main paper, we do not have access to positive pairs because: (1) for the segments of videos in $S$ that are annotated (i.e. $(x_k, v_k) \in M$), we have a set of potential sign variations represented in the dictionary (annotated with the common label $v_k$), rather than a single unique sign; (2) since $S$ provides only weak supervision, even when a word is mentioned in the subtitles we do not know where it appears in the continuous signing sequence (if it appears at all). These ambiguities motivate a Multiple Instance Learning [24] (MIL) objective. Rather than forming positive and negative pairs, we instead form positive bags of pairs, $P^{\text{bags}}$, in which we expect at least one segment from a video from $S$ (or a video from $M$ when labels are available) and a video from $D$ to contain the same sign, and negative bags of pairs, $N^{\text{bags}}$, in which we expect no pair of video segments from $S$ (or $M$) and $D$ to contain the same sign. To incorporate the available sources of supervision into this formulation, we consider two categories of positive and negative bag formations, described next. Each bag is formulated as a set of paired indices: the first value indexes into the collection of continuous signing videos (either $S$ or $M$, depending on context) and the second value indexes into the set of dictionary videos contained in $D$.
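As a toy illustration (the labels and indices below are hypothetical, not from the dataset), the paired-index bags above can be written as plain Python sets of (continuous_idx, dictionary_idx) tuples:

```python
# Hypothetical labelled segments (playing the role of M) and
# dictionary videos (playing the role of D).
seg_labels = {0: "friend", 1: "language"}
dict_labels = {0: "friend", 1: "friend", 2: "name"}

def positive_bag(seg_idx):
    """All dictionary variants sharing the segment's label: under the MIL
    assumption, at least one pair in this bag is a true match."""
    word = seg_labels[seg_idx]
    return {(seg_idx, j) for j, w in dict_labels.items() if w == word}

def negative_bag(seg_idx):
    """Pairs with dictionary videos of any other word: no pair should
    contain the same sign."""
    word = seg_labels[seg_idx]
    return {(seg_idx, j) for j, w in dict_labels.items() if w != word}
```

For example, `positive_bag(0)` contains both dictionary variants of "friend" paired with segment 0, while `negative_bag(0)` pairs segment 0 with the "name" entry.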
Watch and Lookup: using sparse annotations and dictionaries. In the first formulation, Watch-Lookup, we only make use of $D$ and $M$ (and not $S$) to learn the data representation $f$. We define positive bags in two ways: (1) by anchoring on the labelled segment, i.e. each bag consists of a labelled temporal segment and the set of sign variations of the corresponding word in the dictionary, defining a set $P^{\text{bags(seg)}}_{\text{watch,lookup}}$ (illustrated in Fig. A.2 (i), top row); or (2) by anchoring on the dictionary samples that correspond to the labelled segment, to define a second set $P^{\text{bags(dict)}}_{\text{watch,lookup}}$, which takes a mathematically identical form to $P^{\text{bags(seg)}}_{\text{watch,lookup}}$ (i.e. each bag consists of the set of sign variations of the word in the dictionary that corresponds to a given labelled temporal segment, illustrated in Fig. A.2 (ii), top row). The key assumption in both cases is that each labelled segment matches at least one sign variation in the dictionary. Negative bags can be constructed by (1) anchoring on labelled segments and selecting dictionary examples corresponding to different words (Fig. A.2 (i), red examples); or (2) anchoring on the dictionary set for a given word and selecting labelled segments of a different word (Fig. A.2 (ii), red example). These sets manifest as $N^{\text{bags(seg)}}_{\text{watch,lookup}}$ for the former and $N^{\text{bags(dict)}}_{\text{watch,lookup}}$ for the latter. The complete sets of positive and negative bags, $P^{\text{bags}}_{\text{watch,lookup}}$ and $N^{\text{bags}}_{\text{watch,lookup}}$, are formed via the unions of these collections.
Watch, Read and Lookup: using sparse annotations, subtitles and dictionaries. The Watch-Lookup bag formulation defined above has a significant limitation: the data representation, $f$, is not encouraged to represent signs beyond the initial vocabulary represented in $M$. We therefore look at the subtitles present in $S$ (which contain words beyond $M$), in addition to $M$, to construct bags.
To do so, we introduce an additional piece of terminology: when considering a subtitled video for which only one segment is labelled, we use the term "foreground" to refer to the subtitle word that corresponds to the label, and "background" for words which do not possess labelled segments in the video. Similarly to Watch-Lookup, we can construct positive bags (Fig. A.3 (i) and (ii), top rows) which correspond to the use of foreground subtitle words. However, these can now be extended by (a) anchoring on a background segment in the continuous footage and finding candidate matches in the dictionary among all possible matches for the subtitle words (Fig. A.3 (iii), top row), and (b) anchoring on dictionary entries for background subtitle words (Fig. A.3 (iv), top row). Formally, let $\text{Tokenize}(\cdot): S \rightarrow V_L$ denote the function which extracts words from the subtitle that are present in the vocabulary: $\text{Tokenize}(s) = \{w \in s : w \in V_L\}$. We then define background segment-anchored positive bags, $P^{\text{bags(seg-back)}}_{\text{watch,read,lookup}}$: each bag $\{i\} \times B_i$ contains a background segment $i$ from the continuous signing, paired with all dictionary segments $B_i$ whose labels match any token from the corresponding subtitle sentence (visualised as the top row of Fig. A.3 (iii)). Next, we define dictionary-anchored positive background bags, in which the bags contain all pairwise combinations of dictionary entries for a given word and segments in continuous signing whose subtitle contains that background word (visualised as the top row of Fig. A.3 (iv)). We combine these bags with the Watch-Lookup positive bags to maximally exploit the available supervisory signal for positives, yielding $P^{\text{bags}}_{\text{watch,read,lookup}}$.
Fig. A.3 (caption) (a) Input: We illustrate an example minibatch formation for our Watch-Read-Lookup framework. We sample continuous videos with only one labelled segment, which we refer to as the 'foreground' word (e.g., friend, language).
Each continuous video has a subtitle, which we use to sample additional words for which we do not have continuous signing labels ('background' words), e.g. name and what for "what is your friend's name?". We sample all the dictionary videos corresponding to these words. Each word has multiple dictionary instances, grouped into overlapping circles. For example, (iii) anchoring at the continuous background marks the dictionary video for name as positive, because it appears in the subtitle but is not within the annotated temporal window. All other dictionary samples (friend, language, speak) become negative to this anchor. We repeat this for each dictionary background, i.e., marking what as positive. See text for detailed explanations of each case. We also provide a video animation at our project page to show all possible positive/negative pairs for cases (i) to (iv).
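The Tokenize step and the foreground/background word split above can be sketched in a few lines (the punctuation handling and the example vocabulary are our own illustrative assumptions):

```python
def tokenize(subtitle, vocab):
    """Tokenize(s) = {w in s : w in V_L}: keep the subtitle words that are
    in the vocabulary (lower-cased, surrounding punctuation stripped)."""
    return {w.lower().strip("?.,!\"'") for w in subtitle.split()} & vocab

# Hypothetical minibatch element: one labelled "foreground" word plus its
# subtitle; the remaining in-vocabulary words become "background" words,
# eligible for the (iii)/(iv) bag constructions.
vocab = {"friend", "name", "what", "language", "speak"}
subtitle = "what is your friend's name?"
foreground = "friend"
background = tokenize(subtitle, vocab) - {foreground}
```

With these inputs, `background` comes out as the words name and what from the running example, matching the caption's case (iii)/(iv) anchors.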
To counterbalance the positives, we use $S$ in combination with $M$ and $D$ to create four kinds of negative bags. Differently to positive sampling, negatives can be constructed across the full minibatch rather than solely from the current (subtitled video, dictionary) pairing. We first anchor negative bags on foreground segments, so that they contain pairs between a given foreground segment and all available dictionary videos whose label does not match the segment (visualised in Fig. A.3 (i), both rows). We next anchor on the foreground dictionary videos, with bags comprising pairings between the dictionary foreground set and segments within the minibatch that are either labelled with a different word, or can be excluded as a potential match through the subtitles (Fig. A.3 (ii), both rows). Next, we anchor on the background continuous segments, defining bags $N^{\text{bags(seg-back)}}_{\text{watch,read,lookup}}$ which amount to the pairings between each background segment and the set of dictionary videos which do not correspond to any of the background words in the subtitles (Fig. A.3 (iii), both rows). The fourth negative bag construction anchors on the background dictionaries, so that the pairings arise between dictionary examples for a background segment and its corresponding foreground segment, as well as all segments from other batch elements (Fig. A.3 (iv), both rows). These four sets of bags are combined to form the full negative bag set, $N^{\text{bags}}_{\text{watch,read,lookup}}$. In the main paper, these bag formulations are used through Eqn. (1) (the MIL-NCE loss function) to guide learning. Concretely, the Watch-Lookup framework defines positive and negative bags via $P^{\text{bags}} = P^{\text{bags}}_{\text{watch,lookup}}$, $N^{\text{bags}} = N^{\text{bags}}_{\text{watch,lookup}}$, and the Watch-Read-Lookup formulation instead defines the positive and negative bags via $P^{\text{bags}} = P^{\text{bags}}_{\text{watch,read,lookup}}$, $N^{\text{bags}} = N^{\text{bags}}_{\text{watch,read,lookup}}$.
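The way these bags enter the loss can be sketched with a minimal MIL-NCE-style ratio over one positive bag and its negatives (a sketch only: the function name and temperature value are our illustrative assumptions; see loss/loss.py in the code release for the actual implementation):

```python
import torch

def mil_nce(pos_sims, neg_sims, temperature=0.07):
    """MIL-NCE-style objective for a single positive bag and its
    negatives: -log of the summed exponentiated positive-pair
    similarities over the sum across all (positive + negative) pairs.
    `pos_sims` / `neg_sims` are 1-D tensors of embedding similarities
    for the pairs in each bag."""
    pos = torch.logsumexp(pos_sims / temperature, dim=0)
    both = torch.logsumexp(torch.cat([pos_sims, neg_sims]) / temperature, dim=0)
    return both - pos  # equals -log(sum_pos / (sum_pos + sum_neg))
```

Because the positives are summed inside the log, only one pair in a positive bag needs to score highly for the loss to be small, which matches the MIL assumption that at least one (segment, dictionary) pair is a true match.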

C.3 Infrastructure and runtime
Training. The I3D trunk BSL-1K pretraining experiments were performed with four Nvidia M40 graphics cards and took 2-3 days to complete. After freezing the I3D trunk, training the parameters of the MLP with the Watch-Read-Lookup framework took approximately two hours on a single Nvidia M40 graphics card.
Inference. Our sign spotting demo, available online (link at our project page), runs in real time when a GPU is available. A single forward pass through the I3D and MLP layers takes 0.016 seconds to process 16 video frames on a single Nvidia M40 GPU, i.e. roughly 1000 frames per second (much faster than the 25 fps real-time capture speed). However, our current models (both for spotting and recognition) rely on the I3D model, which is a 3D convolutional neural network with about 15M parameters. Future work can focus on compressing these heavy models into more lightweight architectures for mobile applications.