Of Mice and Mates: Automated Classification and Modelling of Mouse Behaviour in Groups using a Single Model across Cages

Behavioural experiments often take place in specialised arenas, but this may confound the analysis. To address this issue, we provide tools to study mice in the home-cage environment, enabling biologists to capture the temporal aspect of an individual's behaviour and to model the interaction and interdependence between cage-mates with minimal human intervention. Our main contribution is the novel Group Behaviour Model (GBM), which summarises the joint behaviour of groups of mice across cages, using a permutation matrix to match the mouse identities in each cage to the model. In support of the above, we also (a) developed the Activity Labelling Module (ALM) to automatically classify mouse behaviour from video, and (b) released two datasets: ABODe for training behaviour classifiers and IMADGE for modelling behaviour.


Introduction
Understanding behaviour is a key aspect of biology, psychology and social science, e.g. for studying the effects of treatments [1], the impact of social factors [32] or the link with genetics [5]. Biologists often turn to model organisms as stand-ins, of which mice are a popular example, on account of their similarity to humans in genetics, anatomy and physiology [58]. Traditionally, biological studies on mice have taken place in carefully controlled experimental conditions [58], in which individuals are removed from their home-cage, introduced into a specific arena and their response to stimuli (e.g. other mice) investigated: see e.g. the work of [52, 16, 2, 48, 59, 57, 28]. This is attractive because: (a) it presents a controlled stimulus-response scenario that can be readily quantified [10], and (b) it lends itself more easily to automated means of behaviour quantification, e.g. through top-mounted cameras in a clutter-free environment [57, 52, 48, 59].
The downside of such 'sterile' environments is that they fail to capture the nuances of natural mouse behaviour [26]. Such stimulus-response scenarios presume a simple forward process of perception-action, which is an over-simplification of the animals' agency [26]. Moreover, mice are highly social creatures, and isolating them for specific experiments is stressful and may confound the analysis [4, 18]. For these reasons, research groups, such as the International Mouse Phenotype Consortium [9] and the TEATIME COST Action 1, amongst others, are advocating for the long-term analysis of rodent behaviour in the home-cage. This is aided by the proliferation of home-cage monitoring systems, but is hampered by the shortage of automated means of analysis.
In this work, we tackle the problem of studying mice in the home-cage, giving biologists tools to analyse the temporal aspect of an individual's behaviour and to model the interaction between cage-mates, while minimising disruption due to human intervention. Our contributions are: (a) a novel Group Behaviour Model (GBM) for detecting patterns of behaviour in a group setting across cages, (b) the Activity Labelling Module (ALM), an automated pipeline for inferring mouse behaviours in the home-cage from video, and (c) two datasets, ABODe for automated activity classification and IMADGE for analysis of mouse behaviours, both of which we make publicly available.
In this paper, we first introduce the reader to the relevant literature in Sec. 2. Section 3 describes the nature of our data, including the curation of two publicly available datasets: this allows us to motivate the methods which are detailed in Sec. 4. We continue by describing the experiments during model fitting and evaluation in Sec. 5 and conclude with a discussion of future work (Sec. 6).

Related Work

Experimental Setups
Animal behaviour has typically been studied over short periods in specially designated arenas (see e.g. [52, 2, 16]) and under specific stimulus-response conditions [48]. This simplifies data collection, but may impact behaviour [3] and is not suited to the kind of long-term studies in which we are interested. Instead, newer research uses either an enriched cage [28, 44, 34, 54] or, as in our case, the home-cage itself [4, 20]. The significance of the use of the home-cage cannot be overstated. It allows for capturing a richer repertoire of nuanced behaviours with minimal intervention and disruption to the animals, but it also presents greater challenges for the automation of the analysis: indeed, none of the systems we surveyed perform automated behaviour classification for individual mice in a group-housed setting.
Concerning the number of observed individuals, single-mouse experiments are often preferred as they are easier to phenotype and control [59, 28, 44, 54]. However, mice are highly social creatures and isolating them affects their behaviour [18], as does handling (often requiring lengthy adjustment periods). Obviously, when modelling social dynamics, the observations must perforce include multiple individuals. Despite this, there are no automated systems that consider the behaviour of each individual in the home-cage as we do. Most research is interested in the behaviour of the group as a whole [2, 39, 11, 30], which circumvents the need to identify the individuals. [14] do model a group setting, but focus on the mother only and how it relates to its litter: similarly, the social interaction test [2, 53] looks at the social dynamics, but only from the point of view of a resident/intruder and in a controlled setting. While [25, 20] and [19] do model interactions, their setup is considerably different in that they (a) use specially-built arenas (not the home-cage), (b) use a top-mounted camera (which is not possible in the home-cage) and (c) classify positional interactions (e.g. Nose-to-Nose, Head-to-Tail, etc., based on fixed proximity/pose heuristics) rather than the type of individual activity (e.g. Feeding, Drinking, Grooming, etc.).

Automated Behaviour Classification
Automated classification of animal behaviour has lagged behind that of human behaviour, with even recent work using manual labels [16, 14, 37]. Automated methods often require heavy data engineering [21, 2, 30, 28]. Animal behaviour inference tends to be harder because human actions are more recognisable [34], videos of humans are usually less cluttered [29] and most challenges in the human domain focus on classifying short videos rather than the long-running recordings typical of animal observation [30]. Another factor is the limited number of publicly available animal observation datasets that target the home-cage. Most datasets (RatSI [38], MouseAcademy [48], CRIM13 [11], MARS [53], PDMB [30], CalMS21 [55] and MABe22 [56]) use a top-mounted camera in an open-field environment: in contrast, our side-view recording of the home-cage represents a much more difficult viewpoint with significantly more clutter and occlusion. Moreover, PDMB only considers pose information, while CRIM13, MARS and CalMS21 deal exclusively with a resident-intruder setup, focusing on global interactions between the two mice rather than individual actions. By releasing ABODe (Sec. 3.3), we aim to fill this gap.

Modelling Mouse Behaviour
The most common form of behaviour analysis involves reporting summary statistics: e.g. the activity levels [23], the total duration in each behaviour [20] or the number of bouts [53], effectively throwing away the temporal information. Even where temporal models are used, as by [2], this is purely as an aid to the behaviour classification, with statistics still reported in terms of total duration in each state (behaviour). This approach provides an incomplete picture, and one that may miss subtle differences [50] between individuals/groups. Some research output does report ethograms of the activities/behaviours through time [5, 53, 46], and [4] in particular model this through sinusoidal functions, but none of the works we surveyed consider the temporal co-occurrence of behaviours between individuals in the cage as we do. For example, in MABe22, although up to three mice are present, the four behaviour labels form a multi-label setup which indicates whether each action is evidenced at each point in time, but not which of the mice is the actor. This limits the nature of the analysis as it cannot capture inter-individual dynamics, which is where our analysis comes in.

Figure 1: An example video frame from our data, showing the raw video (left) and an enhanced visual (right) using CLAHE [61]. In the latter, the hopper is marked in yellow and the water spout in purple, while the (RFID) mouse positions are projected into image space and overlaid as red, green and blue dots.
An interesting problem that emerges in biological communities is determining whether there is evidence of different behavioural characteristics among individuals/groups [37, 58, 14] or across experimental conditions [50, 4]. Within the statistics and machine learning communities, this is typically the domain of anomaly detection, for which [17] provide an exhaustive review. This question is at the core of most biological studies and takes the form of hypothesis testing for significance [14]. The limiting factor is often the nature of the observations employed, with most studies based on the frequency (time spent or counts) of specific behaviours [18, 24]. The analysis by [14] uses a more holistic temporal viewpoint, albeit only on individual mice, while our models consider multiple individuals. [59] employ Hidden Markov Models (HMMs) to identify prototypical behaviour (which they compare across environmental and genetic conditions) but only consider pose features (body shape and velocity) and do so only for individual mice. To our knowledge, we are the first to use a global temporal model inferred across cages to flag 'abnormalities' in another demographic.

Datasets
A key novelty of this work relates to the use of continuous recordings of group-housed mice in the home-cage. In line with the Reduction strategy of the 3Rs [51], we reuse existing data already recorded at the Mary Lyon Centre at MRC Harwell, Oxfordshire (MLC at MRC Harwell). In what follows, we describe the modalities of the data (Sec. 3.1), documenting the opportunities and challenges this presents, as well as our efforts in curating and releasing two datasets to address the behaviour modelling (Sec. 3.2) and classification (Sec. 3.3) tasks.

Data Sources
We use continuous three-day video and position recordings -captured using the home-cage analyses system of Bains et al. [4] -of group-housed mice of the same sex (male) and strain (C57BL/6NTac).

Husbandry
The mice are housed in groups of three as a unique cage throughout their lifetime. To reduce the possibility of impacting social behaviour [26], the mice have no distinguishing external visual markings: instead, they are microchipped with unique Radio-Frequency Identification (RFID) tags placed in the lower part of their abdomen. All recordings happen in the group's own home-cage, thus minimising disruption to their life-cycle. Apart from the mice, the cage contains a food and drink hopper, bedding and a movable tunnel (enrichment structure), as shown in Fig. 1. For each cage (group of three mice), three-to-four-day continuous recordings are performed when the mice are 3 months, 7 months, 1 year and 18 months old. During monitoring, the mice are kept on a standard 12-hour light/dark cycle with lights-on at 07:00 and lights-off at 19:00.

Modalities
The recordings (video and position) are split into 30-minute segments to be more manageable. Experiments are thus uniquely identified by the cage-id to which they pertain, the age group at which they are recorded and the segment number.
A single-channel infra-red camera captures video at 25 frames per second from a side-mounted viewpoint at 1280 × 720 resolution. Understandably, the hopper itself is opaque, and this impacts the lighting (and the ability to resolve objects) in the lower right quadrant. As regards cage elements, the hopper itself is static, and the mice can feed from either the left or right entry-points. From the provided viewpoint, the water-spout is on the left of the hopper towards the back of the cage. The bedding itself consists of shavings and is highly dynamic, with the mice occasionally burrowing underneath it. Similarly, the cardboard tunnel can be moved around or chewed, and varies in appearance throughout recordings. This clutter, together with the close confines of the cage, leads to severe occlusion, even between the mice themselves.
With no visual markings, the mice are only identifiable through the implanted RFID tag, which is picked up by a 3 × 6 antenna array below the cage. For visualisation purposes (and ease of reference), mice within the same cage are sorted in ascending order by their identifier and denoted Red, Green and Blue. The antennas are successively scanned in numerical order to test for the presence of a mouse: the baseplate performs on average 2.5 full scans per second, but this is synchronised to the video frame rate. The RFID pickup itself suffers from occasional dropout, especially when the mice are above the ground (e.g. climbing or standing on the tunnel) or in close proximity (i.e. during huddling).

Identifying the Mice
A key challenge in the use of the home-cage data is the correct tracking and identification of each individual in the group. This is necessary to relate the behaviour to the individual and also to connect statistics across recordings (including different age-groups). However, the close confines of the cage and lack of visible markers make this a very challenging problem. Indeed, standard methods, including the popular DeepLabCut framework of Lauer et al. [33], do not work on the kind of data that we use.
Our solution lies in the use of the Tracking and Identification Module (TIM), documented in Camilleri et al. [13]. We leverage Bounding Boxes (BBoxes) output by a neural-network mouse detector, which are assigned to the weak (RFID) location information by solving a custom covering problem. The assignment is based on a probabilistic weight model of visibility, which takes the probability of occlusion into account. The TIM yields per-frame identified BBoxes for the visible mice, and flags each mouse that is not visible.

IMADGE: A dataset for Behaviour Analysis
The Individual Mouse Activity Dataset for Group Environments (IMADGE) is our curated selection of data, aimed at providing a general dataset for analysing mouse behaviour in group settings. It includes automatically-generated localisation and behaviour labels for the mice in the cage, and is available at https://github.com/michael-camilleri/IMADGE for research use. The dataset also forms the basis for the ABODe dataset (Sec. 3.3).

Data Selection
IMADGE contains recordings of mice from 15 cages in the Adult (1-year) and 10 cages in the Young (3-month) age-groups: nine of the cages exist in both subsets and are thus useful for comparing behaviour dynamics longitudinally. All mice are male of the C57BL/6NTac strain. Since this strain of mice is crepuscular (mostly active at dawn/dusk), we provide segments that overlap to any extent with the morning (06:00-08:00) and evening (18:00-20:00) periods (at which lights are switched on or off respectively), resulting in recording runs of generally 2.5 hours. This is particularly relevant because changes in the onset/offset of activity around these times can be very good early predictors of e.g. neurodegenerative conditions [6]. The runs are collected over the three-day recording period, yielding six runs per cage, equivalent to 90 segments for the Adult and 61 segments for the Young age-groups.

Data Features
IMADGE exposes the raw video for each of the segments. The basic unit of processing for all other features is the Behaviour Time Interval (BTI), which is one second in duration (25 video frames). This was chosen to balance expressivity of the behaviours (reducing the probability that a BTI spans multiple behaviours) against imposing an excessive annotation effort when training behaviour classifiers.
The main modality is the per-mouse behaviour, obtained automatically by our ALM. The observability of each mouse in each BTI is first determined: behaviour classification is then carried out on samples deemed Observable. The behaviour is one of seven labels: Immobile, Feeding, Drinking, Self-Grooming, Allo-Grooming, Locomotion and Other. Behaviours are mutually exclusive within the BTI, but we retain the full probability score over all labels rather than a single class label.

Figure 2: The ALM for classifying observability and behaviour per mouse. The input signal comes from three modalities: (i) coarse position (RFID), (ii) identified BBoxes (using the TIM as implemented in [13]) and (iii) video frames. An OC (iv) determines whether the mouse is observable and its behaviour can be classified. If this is the case, then the BC (v) is activated to generate a probability distribution over behaviours for the mouse. Further architectural details appear in the text.
The RFID-based mouse position per BTI is summarised in two fields: the mode of the pickups within the BTI and the absolute number of antenna cross-overs. The BBoxes for each mouse are generated per-frame using our own TIM [13], running on each segment in turn. The per-BTI BBox is obtained by averaging the top-left/bottom-right coordinates throughout the BTI (for each mouse).
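As a concrete illustration, the per-BTI summarisation described above can be sketched as follows. This is a minimal sketch: the function name, field names and input format are our own for illustration, not the dataset's actual schema.

```python
import numpy as np
from collections import Counter

def summarise_bti(rfid_pickups, bboxes):
    """Summarise one Behaviour Time Interval (BTI) for a single mouse.

    rfid_pickups: sequence of antenna indices picked up within the BTI.
    bboxes: (F, 4) list/array of per-frame [x1, y1, x2, y2] boxes (may be empty).
    """
    # Mode of the RFID pickups within the BTI
    mode_antenna = Counter(rfid_pickups).most_common(1)[0][0]
    # Number of antenna cross-overs: changes between consecutive pickups
    crossovers = sum(a != b for a, b in zip(rfid_pickups, rfid_pickups[1:]))
    # Per-BTI BBox: average the top-left/bottom-right corners over the frames
    bboxes = np.asarray(bboxes, dtype=float)
    mean_bbox = bboxes.mean(axis=0) if len(bboxes) else None
    return {"position": mode_antenna, "crossovers": crossovers, "bbox": mean_bbox}
```

For example, pickups `[3, 3, 4, 3]` yield position 3 with two cross-overs, and two boxes are averaged corner-wise into one per-BTI box.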

ABODe: A dataset for Behaviour Classification
Our analysis pipeline required a mouse behaviour dataset that can be used to train models to automatically classify behaviours of interest, thus allowing us to scale behaviour analysis to larger datasets. Our answer to this need is the Annotated Behaviour and Observability Dataset (ABODe). The dataset, available at https://github.com/michael-camilleri/ABODe, consists of video, per-mouse locations in the frame and per-second behaviour labels for each of the mice.

Data Selection
For ABODe we used a subset of data from the IMADGE dataset. We randomly selected 200 two-minute snippets from the Adult age-group, with 100 for Training, 40 for Validation and 60 for Testing. These were selected such that data from a cage appears exclusively in one of the splits (training/validation/test), ensuring a better estimate of generalisation performance. The data was subsequently annotated by a trained phenotyper (see Appendix B).

Data Features
As in IMADGE, ABODe contains the raw video (as two-minute snippets).We also provide the per-frame per-mouse RFID-based position reading and BBox location within the image (generated using TIM).
The dataset consists of 200 two-minute snippets, split as 110 Training, 30 Validation and 60 Test (see Tab. B.3). To simplify our classification and analysis, the behaviour of each mouse is defined at regular BTIs, and is either Not Observable or one of seven mutually exclusive labels: Immobile, Feeding, Drinking, Self-Grooming, Allo-Grooming, Locomotion or Other. We enforce that each BTI for each mouse is characterised by exactly one behaviour: this implies both exhaustiveness and mutual exclusivity of the behaviours. The behaviour of each mouse is annotated by a trained phenotyper according to a more extensive labelling schema which takes into account tentative labellings and unclear annotations: this is documented in Appendix B.1. Note that, unlike some other group-behaviour projects, we focus on individual actions (as above) rather than explicitly positional interactions (e.g. Nose-to-Nose, Chasing, etc.): this is a conscious decision, driven by the biological processes under study as informed by years of research experience at the MLC at MRC Harwell.

Methods
The main contribution of this work relates to the GBM, which is described in Sec. 4.2. However, obtaining behaviour labels for each of the mice necessitated development of the ALM, described in Sec. 4.1.

Classifying Behaviour: the ALM
Analysing behaviour dynamics in social settings requires knowledge of the individual behaviour throughout the observation period. Our goal is thus to label the activity of each mouse, or flag that it is Not Observable, at discrete BTIs (in our case every second). A strong point of our analysis is the volume of data we have access to: this allows our observations to carry more weight and be more relevant to the biologists, as they are drawn from hours (rather than minutes) of observations. However, this scale of data is also challenging, making manual labelling infeasible.
As already argued in Secs. 2.1 and 2.2, existing setups do not consider the side-view home-cage environment that we deal with. It was thus necessary to develop our own ALM (Fig. 2) to automatically determine whether each mouse is observable in the video, and if so, infer a probability distribution over which behaviour it is exhibiting. Using discrete time-points simplifies the problem by framing it as a pure classification task, making it easier to model (Sec. 4.2). We explicitly use a hierarchical label space (observability vs. behaviour, see Fig. 2(vi)), since (a) it allows us to break down the problem using an Observability Classifier (OC) followed by a Behaviour Classifier (BC) in cascade, and (b) we prefer to handle Not Observable explicitly as missing data rather than having the BC infer unreliable classifications, which can in turn bias the modelling. It is also semantically inconsistent to treat Not Observable as a label mutually exclusive with the rest of the behaviours: specifically, even if the mouse is Not Observable, we know it is doing exactly one of the other behaviours (even if we cannot be sure which).
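The cascade above can be sketched as a small dispatch function. This is illustrative only: the `predict`/`predict_proba` interfaces of `oc` and `bc` are our assumptions, not the module's actual API.

```python
BEHAVIOURS = ("Immobile", "Feeding", "Drinking", "Self-Grooming",
              "Allo-Grooming", "Locomotion", "Other")

def label_bti(features, oc, bc):
    """Label one BTI for one mouse via the OC -> BC cascade (sketch)."""
    if not oc.predict(features):       # Stage 1: is the mouse observable?
        return None                    # Not Observable -> treated as missing data
    return bc.predict_proba(features)  # Stage 2: distribution over the 7 labels
```

Returning `None` (rather than an eighth label) mirrors the design decision above: downstream models see Not Observable as missing data, not as a behaviour.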
In the next subsections we describe in turn the OC and BC sub-modules: note that we postpone detailed training and experimental evidence for the choice of the architectures to Sec. 5.

Determining Observability
For the OC (iv in Fig. 2) we use as features: the position of the mouse (RFID), the fraction of frames (within the BTI) in which a BBox for the mouse appears, the average area of such BBoxes and, finally, the first 30 Principal Component Analysis (PCA) components of the feature-vector obtained by applying the Long-term Feature Bank (LFB) model [60] to the video. These are fed to a logistic-regression classifier trained using the binary cross-entropy loss [8, p. 206] with l2 regularisation, weighted by inverse class frequency (to address class imbalance). We judiciously choose the operating point to balance the errors the system makes: further details regarding the choice and training of the classifier appear in Sec. 5.1.2.
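A minimal sketch of such an observability classifier using scikit-learn, with random stand-in data; the feature dimensions, variable names and random labels are our assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-ins for the real features: 3 hand-crafted values per BTI (RFID
# position, BBox fraction, average BBox area) plus an LFB video embedding.
hand_crafted = rng.normal(size=(500, 3))
lfb_features = rng.normal(size=(500, 256))
y = rng.integers(0, 2, size=500)          # 1 = Observable, 0 = Not Observable

# Reduce the LFB embedding to its first 30 principal components, then
# concatenate with the hand-crafted features.
lfb_30 = PCA(n_components=30).fit_transform(lfb_features)
X = np.hstack([hand_crafted, lfb_30])

# l2-regularised logistic regression, weighted by inverse class frequency.
oc = make_pipeline(StandardScaler(),
                   LogisticRegression(penalty="l2", class_weight="balanced"))
oc.fit(X, y)
scores = oc.predict_proba(X)[:, 1]        # threshold chosen on validation data
```

In practice the decision threshold on `scores` would be tuned on held-out data, as discussed in the evaluation (Sec. 5.1.2).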

Probability over Behaviours
The BC (v in Fig. 2) operates only on samples deemed Observable by the OC, outputting a probability distribution over the seven behaviour labels (Sec. 3.3). The core component of the BC is the LFB architecture of Wu et al. [60], which serves as the backbone activity classifier. For each BTI, the centre frame and six others on either side at a stride of eight are combined with the first detection of the mouse in the same period and fed to the LFB classifier. The logit outputs of the LFB are then calibrated using temperature scaling [27], yielding a seven-way probability vector. In instances where there is no detection for the BTI, a default distribution is output instead. All components of the BC (including the choice of backbone architecture) were fine-tuned on our data, as discussed in Sec. 5.1.3.
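Temperature scaling itself is straightforward to implement. The following is a minimal sketch (not the paper's code) that fits a single scalar temperature on held-out logits by minimising the negative log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    """Negative log-likelihood of the labels under temperature-scaled logits."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)           # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels):
    """Fit one scalar temperature on validation logits (bounds are our choice)."""
    res = minimize_scalar(nll, bounds=(0.05, 20.0), args=(logits, labels),
                          method="bounded")
    return res.x
```

At test time, the calibrated probabilities are simply `softmax(logits / T)`; a single scalar leaves the arg-max class unchanged and only adjusts confidence.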
Although the identification of key-points on a mouse is a sensible way to extract pose information in a clean environment with a top-mounted camera, it is much more problematic in our cluttered home-cage environment with a side-mounted camera. Indeed, attempts to use the popular DeepLabCut framework [33] failed because of the lack of reproducible key-points (see our previous work [13, Sec. 5.2]). Hence we predict the behaviour with the BC directly from the RFID data, BBoxes and frames (as illustrated in Fig. 2), without this intermediate step.
Figure 3: Graphical representation of our GBM. '×' refers to standard matrix multiplication. To reduce clutter, the model is not shown unrolled in time.

Modelling Behaviour Dynamics
In modelling behaviour, we seek to: (a) capture the temporal aspect of the individual's behaviour, and (b) model the interaction and interdependence between cage-mates. These goals can be met by fitting an HMM on a per-cage basis, in which the behaviour of each mouse is represented by factorised categorical emissions contingent on a latent 'regime' (which couples them together). However, this yields a separate model per cage, making it hard to analyse and compare observations across cages.
To address this, we seek to fit one GBM across cages. The key problem is that the assignment of mouse identities in a cage (denoted R, G, B) is arbitrary. As an example, if R represents a dominant mouse in one cage, this role may be taken by e.g. mouse G in another cage 2. Forcing the same emission probabilities across mice avoids this problem, but is too restrictive of the dynamics that can be modelled. Instead, we introduce a permutation matrix to match the mice in any given cage to the GBM, as shown in Fig. 3. This formulation is broadly applicable to scenarios in which one seeks to uncover shared behaviour dynamics across different entities (e.g. in the analysis of sports plays).
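To make the role of the permutation matrix concrete, the following sketch (with illustrative numbers, not real data) shows a 3 × 3 permutation reordering the per-mouse behaviour distributions of one cage into the model's canonical slots:

```python
import numpy as np

# A 3x3 permutation matrix mapping cage identities (rows of x_cage: R, G, B)
# onto the model's canonical slots; which permutation is correct for a given
# cage is what the GBM must infer.
Q = np.array([[0, 1, 0],     # canonical slot 1 <- mouse G
              [1, 0, 0],     # canonical slot 2 <- mouse R
              [0, 0, 1]])    # canonical slot 3 <- mouse B

# Per-mouse behaviour distributions for one BTI (rows: R, G, B; columns:
# the 7 behaviour labels), as output by the ALM. Values are illustrative.
x_cage = np.array([[0.70, 0.10, 0.05, 0.05, 0.05, 0.03, 0.02],
                   [0.10, 0.60, 0.10, 0.05, 0.05, 0.05, 0.05],
                   [0.20, 0.20, 0.20, 0.10, 0.10, 0.10, 0.10]])

x_canonical = Q @ x_cage     # reorder the mice into the canonical assignment
```

Because a permutation matrix has exactly one 1 per row and column, multiplying by `Q` only reorders the rows (mice); the distributions themselves are untouched.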
As in an HMM, there is a latent state Z, indexed by cage m, recording-run n and time t, forming a Markov chain (over t), which represents the state of the cage as a whole. This 'regime' is parametrised by π at the first time-point (initial probability) as well as Ω (transition probabilities), and models dependence both in time and between individuals.
We then use X̄ to denote the behaviour of each mouse: this is a vector of variables, one for each mouse k ∈ {1, . . . , K}, in which the order follows a 'canonical' assignment. Note that each mouse is represented by a complete categorical probability distribution (as output from the ALM), rather than a hard label, and is conditioned on Z through the emission probabilities Ψ. This allows us to propagate uncertainty in the outputs of the ALM, with the error implicitly captured through Ψ.
For each cage m, the random variable Q[m] governs which mouse k (R/G/B) is assigned to which index in the canonical representation X̄, and is fixed for all samples n, t and behaviours x. The sample space of Q consists of all possible permutation matrices of size K × K, i.e. matrices whose entries are 0/1 such that there is only one 'on' cell per row and column. Note that Q is fixed per cage rather than per sample: this is because the mouse identities have already been established through time by the TIM, and it is only the permutation of roles between cages that needs to be considered. The GBM is thus a novel combination of an HMM, to model the interdependence between cage-mates, and a permutation matrix, to handle the mapping between the model's canonical representation X̄ and the observed X.
Note that fixing Q and X̄ determines X completely by simple linear algebra. This allows us to write out the complete-data likelihood in closed form (see appendix). The parameters Θ = {π, Ω, ξ, Ψ} of the model are inferred through the EM algorithm [40], as shown in Algorithm 1 and detailed in the appendix. We seed the parameter set using a model fit to one cage, and subsequently iterate between optimising Q (per cage) and optimising the remaining parameters using standard EM on the data from all cages. This procedure is carried out using models initialised by fitting to each cage in turn, and the final model is then selected based on the highest likelihood score (much like with multiple random restarts). Furthermore, we use the fact that the posterior over Q is highly peaked to replace the expectation over Q with its maximum (a point estimate), thereby greatly reducing the computational complexity.
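The point estimate over Q can be obtained by exhaustively enumerating the K! permutation matrices (for K = 3, only six candidates). In the sketch below, `loglik_fn` is a hypothetical stand-in for the GBM's per-cage likelihood computation; only the enumeration itself is shown.

```python
import itertools
import numpy as np

def best_permutation(loglik_fn, K=3):
    """Exhaustive point-estimate search over all K! permutation matrices.

    loglik_fn(Q) is assumed to return the log-likelihood of one cage's data
    under permutation matrix Q (the model-specific part, not shown here).
    """
    best_Q, best_ll = None, -np.inf
    for perm in itertools.permutations(range(K)):
        Q = np.eye(K)[list(perm)]          # rows of the identity, permuted
        ll = loglik_fn(Q)
        if ll > best_ll:
            best_Q, best_ll = Q, ll
    return best_Q, best_ll
```

Alternating this per-cage search with standard EM updates of the shared parameters gives the overall fitting loop described above; the exhaustive search is cheap because it is performed once per cage per alternation, not per time-point.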

Experiments
We report two sets of experiments. We begin in Sec. 5.1 by describing the optimisation of the various modules that make up the ALM, and subsequently describe the analysis of the group behaviour in Sec. 5.2. The code to reproduce these results is available at https://github.com/michael-camilleri/Mice-N-Mates.

Fine-tuning the ALM
The ALM was fit and evaluated on the ABODe dataset.

Metrics
For both the observability and behaviour components of the ALM we report accuracy and F1 score [see e.g. 43, Sec. 5.7.2.3]. We use the macro-averaged F1 to better account for the class imbalance. This is particularly severe for the observability classification, in which only about 7% of samples are Not Observable, but it is paramount to flag these correctly. Recall that the Observable samples will be used to infer behaviour (Sec. 4.1.2), which is in turn used to characterise the dynamics of the mice (Sec. 4.2). Hence, it is more detrimental to produce False Positive (FP) outputs, which result in Unreliable behaviour classifications (i.e. when the sample is Not Observable but the OC deems it Observable, which can throw the statistics awry), than to miss some Observable periods through False Negatives (FNs) (which, though Wasteful of data, can generally be smoothed out by the temporal model). This construct is formalised in Tab. 1, where we use the terms Unreliable and Wasteful as they better illustrate the repercussions of the errors. In our evaluation, we report the number of Unreliable and Wasteful samples to take this imbalance into account.
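The Unreliable and Wasteful counts are simply the two off-diagonal entries of the observability confusion matrix; a minimal sketch (function and argument names are ours):

```python
def error_breakdown(true_observable, pred_observable):
    """Count Unreliable (FP) and Wasteful (FN) observability decisions.

    Unreliable: mouse Not Observable but the OC says Observable (downstream
    behaviour classifications would be untrustworthy).
    Wasteful: mouse Observable but the OC says Not Observable (usable data
    is discarded needlessly).
    """
    pairs = list(zip(true_observable, pred_observable))
    unreliable = sum(pred and not true for true, pred in pairs)
    wasteful = sum(true and not pred for true, pred in pairs)
    return unreliable, wasteful
```

Reporting these raw counts, rather than only accuracy, makes the asymmetric cost of the two error types explicit when choosing the operating point.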
For the BC, we also report the normalised (per-sample) log-likelihood score, L, given that we use it as a probabilistic classifier.

Observability

Comparison of the candidate observability classifiers (Fig. 4) brings out two clear candidate models. Note how at most operating points the LgR model is the best classifier, except for some ranges where NB is better (higher). These two were subsequently compared in terms of the number of Unreliable and Wasteful samples at two thresholds: one at the point at which the number of Wasteful samples is on par with the true number of Not Observable samples in the data (i.e. 8%), and the other at which the number of predicted Not Observable samples equals the statistic in the ground-truth data. These appear in Tab. 2: the LgR outperforms the NB in almost all cases, and hence we chose the LgR classifier operating at the Wasteful = 8% point.

Behaviour
We explored two architectures as backbones (feature extractors) for the BC: the Spatio-Temporal Localisation Transformer (STLT) of [49] and the LFB of [60], on the basis of them being most applicable to the spatio-temporal action-localisation problem [15]. In both cases, we: (a) used pre-trained models and fine-tuned them on our data, (b) standardised the pixel intensity (over all the images) to unit mean and standard deviation (fit on samples from the tuning split), and (c) investigated lighting-enhancement techniques [35], although this did not improve results in any of our experiments. We provide an overview of the training below, but the interested reader is directed to Camilleri [12] for further details.
The STLT uses the relative geometry of BBoxes in a scene (layout branch), as well as the video frames (visual branch), to classify behaviour [49]. The visual branch extracts per-frame features using a ResNet-50. The layout branch is a transformer architecture which considers the temporal and spatial arrangement of detections of the subject animal and contextual objects (the other cage-mates and the hopper). In order to encode invariance to the absolute mouse identities, the assignment of the cage-mates to the slots 'cagemate1' and 'cagemate2' was randomly permuted during training. The signal from each branch is fused at the last stages of the classifier. The base architecture was extended to use information from outwith the BTI, drawing on temporal context from surrounding time-points. We ran experiments using the layout branch only and the combined layout/visual branches, each utilising different numbers of frames and strides. We also experimented with focusing the visual field on the detected mouse alone. Training was done using Adam [31], with random-crop and colour-jitter augmentations, and we explored various batch sizes and learning rates: validation-set evaluation during training directed us to pick the dual-branch model with 25 frames (12 on either side of the centre frame) at a stride of 2 as the best performer.
The LFB [60], on the other hand, is a dedicated spatio-temporal action-localisation architecture which combines BTI-specific features with a long-term feature bank extracted from the entire video, joined together through an attention mechanism before being passed to a linear classification layer. Each of the two branches (feature extractors) uses a FastRCNN network with a ResNet-50 backbone. We used the pre-trained feature-bank generator and fine-tuned the short-term branch, attention mechanism and classification layer end-to-end on our data. Training was done using Stochastic Gradient Descent (SGD) with a fixed-step learning-rate scheduler: we used batches of 16 samples and trained for 50 epochs, varying learning rates, warm-up periods and, crucially, the frame-sampling procedure, again in terms of the total number of frames and the stride. We explored two augmentation procedures (as suggested by Wu et al. [60]): (a) random rescaling and cropping, and (b) colour jitter (based only on brightness and contrast). Validation-set scores allowed us to choose the 11-frame model with a stride of 8, with resize-and-crop and colour-jitter augmentations (at the best-performing epoch), as our LFB contender. To choose our backbone architecture, the best performer in each case was evaluated in terms of Accuracy, F1 and log-likelihood on both the Training and Validation sets in Tab. 3. Given the validation-set scores, the LFB model, with an F1 of 0.61 (compared to 0.36 for the STLT), was chosen as the BC. This was achieved despite the coarser layout information available to the LFB and the changes to the STLT architecture: we hypothesise that this is due to the ability of the LFB to draw on longer-term context from the whole video (as opposed to the few seconds available to the STLT).
The LFB (and even the STLT) model can only operate on samples for which there is a BBox for the mouse. We need to contend, however, with instances in which the mouse is not identified by the TIM, but the OC reports that it should be Observable. In such cases, we fall back on a fixed categorical distribution fit to the samples which exhibited this 'error' in the training data (i.e. Observable but no BBox).
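Fitting such a fallback categorical distribution amounts to counting the behaviour labels of the 'Observable but no BBox' training samples; a minimal sketch, in which the add-one smoothing is our assumption (the paper does not state how zero counts are handled):

```python
from collections import Counter

def fit_fallback_categorical(labels, classes):
    """Fit a fixed categorical distribution over behaviour classes from
    the training samples that were annotated Observable but had no BBox.
    Add-one (Laplace) smoothing is an assumption, used so that unseen
    classes still receive non-zero probability."""
    counts = Counter(labels)
    total = len(labels) + len(classes)  # each class contributes +1
    return {c: (counts[c] + 1) / total for c in classes}
```

At test time, whenever the TIM misses a mouse that the OC deems Observable, the ALM would emit this fixed distribution instead of a classifier output.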

End to End performance
In Table 4 we show the performance of the ALM on the held-out test set. Since there is no other system that works on our data to compare against, we report the performance of a baseline classifier which returns the prior probabilities.
In terms of observability, the ALM achieves slightly lower accuracy but a much higher F1 score, as it seeks to balance the types of errors (cutting the Unreliable count by 34%). In terms of behaviour, when considering only Observable classifications, the system achieves 68% accuracy and 0.54 F1 despite the high class imbalance. The main culprits for the low score are the grooming behaviours, which, as shown in Fig. 5, are often confused for Immobile. Within the supplementary material, we provide a demo video (Online Resource 1) showing the output from the ALM for a sample one-minute clip, in which the mice exhibit a range of behaviours, including Feeding, Locomotion, Self-Grooming and Immobile.

Group Behaviour Analysis
We use the IMADGE dataset for our behaviour analysis, focusing on the adult demographic; the young demographic is used for comparison later.

Overall Statistics
It is instructive to look at the overall statistics of behaviours. In Tab. 5 we report the percentage of time mice exhibit a particular behaviour, averaged over cages and split by age group. The Immobile behaviour clearly stands out as the most prevalent, and it increases markedly as the mice get older (from 30% to 45%): this is balanced by a decrease in Other, with most other behaviours exhibiting much the same statistics.

Metrics
Evaluating an unsupervised model like the GBM is not straightforward, since there is no objective ground-truth for the latent states. Instead, we compare models using the normalised log-likelihood L. When reporting relative changes in L, we use a baseline model to set an artificial zero (otherwise the log-likelihood is not bounded from below). Let L_BL represent the normalised log-likelihood of the baseline model: an independent distribution per mouse per frame. We then use L_Θ for the likelihood under the global model (parametrised by Θ) and L_Θ* for the likelihood under the per-cage model (Θ*). We can then define the Relative Difference in Log-Likelihood (RDL) between the two models as:

RDL = (L_Θ − L_Θ*) / (L_Θ* − L_BL).    (2)

In evaluating on held-out data, we face a further complexity due to the temporal nature of the process: samples cannot be considered independent of their neighbours. Instead, we treat each run (2½ hours) as a single fold, and evaluate models using leave-one-out cross-validation, i.e. we train on five of the folds and evaluate on the held-out fold in turn.
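The RDL of Eq. (2) reduces to a one-line computation once the three normalised log-likelihoods are available; a minimal sketch, in which the exact sign and normalisation convention is our assumption:

```python
def rdl(ll_global, ll_percage, ll_baseline):
    """Relative Difference in Log-Likelihood: the change of the global
    model relative to the per-cage model, measured against the baseline
    likelihood as an artificial zero. The convention (negative values
    meaning the global model is worse) is an assumption."""
    return (ll_global - ll_percage) / (ll_percage - ll_baseline)
```

Under this convention a global model that matches the per-cage model exactly scores an RDL of zero, and a 'drop' of 4.8% would appear as a value of about −0.048.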

Size of Z
The number of latent states |Z| in the GBM governs the expressivity of the model: too small and it is unable to capture all the dynamics; too large and it becomes harder to interpret. To explore this trade-off, we fit a per-cage model (i.e. without the Q construct) to the adult mice data for varying |Z| ∈ {2, ..., 13}, and computed L on held-out data (using the aforementioned cross-validation approach). As shown in Fig. 6, the likelihood increased gradually, but gains slowed beyond |Z| = 7: we thus use |Z| = 7 in our analysis.

Peaked Posterior over Q
Our Algorithm 1 assumes that the posterior over Q is sufficiently peaked. To verify this, we computed the posterior for all permutations over all cages given each per-cage model. To two decimal places, the posterior is deterministic, as shown in Fig. 7 for the cages in the Adult demographic using the model trained on cage L. The Young demographic exhibited the same phenomenon.
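Because |Q| = 3! = 6, the posterior over permutations can be computed by exact enumeration; a sketch assuming a uniform prior over permutations, where `loglik(perm)` is a stand-in for the per-cage data log-likelihood under the global model with that identity matching:

```python
import itertools
import math

def posterior_over_Q(loglik, n_mice=3):
    """Exact posterior over the K! identity permutations, assuming a
    uniform prior. loglik(perm) is a placeholder for the data
    log-likelihood of a cage under a given matching (an assumption)."""
    perms = list(itertools.permutations(range(n_mice)))
    lls = [loglik(p) for p in perms]
    m = max(lls)  # log-sum-exp trick for numerical stability
    weights = [math.exp(l - m) for l in lls]
    z = sum(weights)
    return {p: w / z for p, w in zip(perms, weights)}
```

With well-separated likelihoods the resulting posterior is essentially one-hot, which is exactly the 'peaked' condition Algorithm 1 relies on.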

Quality of Fit
During training of the per-cage models, we noticed extremely similar dynamics between the cages (see Camilleri [12]). This is not unexpected, given that the mice are of the same strain and sex, and recorded at the same age-point. Nonetheless, we wished to investigate the penalty paid by using a global rather than a per-cage model. To this end, in Tab. 6 we report the L and RDL (Eq. (2)) of the GBM evaluated on data from each cage in turn. Although the per-cage model is understandably better than the GBM on its own data, the average drop is just 4.8%, which is a reasonable penalty to pay in exchange for a global model. Plotting the same values on the number line (Fig. 8) shows two cages, D and F, that stand out from the rest due to a relatively higher drop. This led us to further investigate the two cages as potential outliers in our analysis: see Sec. 5.2.6.

Latent Space Analysis
Figure 9 shows the parameters of the trained GBM. Most regimes have long dwell times, as indicated by the values close to 1 on the diagonal of Ω. For the emission matrices Ψ_1, ..., Ψ_3, note that regime F captures the Immobile behaviour for all mice, and is the most prevalent (0.26 steady-state probability). The purity of this regime indicates that the mice are often Immobile at the same time, reinforcing the biological knowledge that they tend to huddle together for sleeping; it is interesting that this was picked up by the model without any a priori bias. This is further evidenced in the ethogram visualisation in Fig. 10, which also points out regime C as indicating when any two mice are Immobile, and D for any two mice exhibiting Self-Grooming. Similarly, regime A is most closely associated with the Other label, although it is less pure.
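The steady-state probabilities and dwell times reported alongside Ω follow directly from the transition matrix; a minimal sketch (the geometric dwell-time formula 1/(1−Ω_ii) is the standard one for Markov chains):

```python
import numpy as np

def steady_state(omega):
    """Stationary distribution of a row-stochastic transition matrix:
    the left eigenvector of omega for eigenvalue 1, normalised to sum
    to one."""
    vals, vecs = np.linalg.eig(omega.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def dwell_times(omega):
    """Expected dwell time (in BTIs) per regime: under a Markov chain
    the stay length is geometric, with mean 1/(1 - self-transition)."""
    return 1.0 / (1.0 - np.diag(omega))
```

For instance, a regime with a self-transition probability of 0.9 has an expected dwell time of 10 BTIs, which is how the 'long dwell times' in Fig. 9 can be read off the diagonal of Ω.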
Of particular interest are the regimes associated with the Feeding behaviour, which differ across mice: B, E and G for mice 1, 2 and 3 respectively. This is surprising, given that more than one mouse can feed at a time (the design of the hopper is such that there is no need to compete for feeding resources). It is also significant as a global phenomenon, as it could be indicative of a pecking order in the cage. Another aspect that emerges is the co-occurrence of Self-Grooming with Immobile or Other behaviours: note how in regime D (which has the highest probability of Self-Grooming) these are the most prevalent.
Our Quality-of-Fit analysis (Sec. 5.2.5) highlighted cages D and F as exhibiting deviations in the behaviour dynamics compared to the rest of the cages. To investigate these further, we plotted the parameters of the per-cage models for these two cases in Fig. 11, and compare them against an 'inlier' cage, L. Note that both the latent states and the mouse identities are only identifiable subject to a permutation (this is precisely the need addressed by the GBM's Q construct). To make comparison easier, we chose the permutations of both the latent states and the ordering of mice that (on a per-cage basis) maximise agreement with the global model. The emission dynamics (Ψ) for the 'outliers' are markedly different from the global model: note, for example, how the feeding dynamics for cages D and F do not exhibit the same pattern as in the GBM and cage L, with the same feeding regime G shared by mice 1 and 3 in both cases. For cage D, there is also evidence that a 6-regime model suffices to explain the dynamics (note how regimes B and E are duplicates with high switching frequency between the two).
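The state alignment described above (and in the caption of Fig. 11) uses the Hungarian algorithm; a sketch in which the L1 distance between emission rows is our assumed cost (the paper states only that the Hungarian algorithm was used):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_states(psi_cage, psi_global):
    """Match per-cage latent states to global ones via the Hungarian
    algorithm, minimising total L1 distance between emission rows.
    psi_cage, psi_global: (K states x B behaviours) emission matrices.
    Returns cols, where cols[i] is the global state matched to
    per-cage state i. The L1 cost is an assumption."""
    cost = np.abs(psi_cage[:, None, :] - psi_global[None, :, :]).sum(-1)
    _, cols = linear_sum_assignment(cost)
    return cols
```

The same machinery applies to the ordering of mice, with rows indexed by mouse rather than by latent state.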

Anomaly Detection
We used the model trained on our 'normal' demographic to analyse data from 'other' cages, i.e. anomaly detection. This capacity is useful, e.g., to identify unhealthy mice or strain-related differences, or, as in our proof of concept, the evolution of behaviour with age. In Fig. 12 we show the trained GBM evaluated on data from both the adult (blue) and young (orange) demographics in IMADGE. Apart from two instances, L is consistently lower in the young group than in the adult demographic; moreover, for all cages where we have data in both age groups, L is always lower for the young mice. Indeed, an optimised binary threshold achieves 90% accuracy, and a t-test on the two subsets indicates significant differences (p-value = 1.1 × 10⁻⁴). Given that we used mice from the same strain (indeed, even from the same cages), the video recordings are very similar: we therefore expect the ALM to have similar performance on the younger demographic, suggesting that the differences arise from the behaviour dynamics.
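The threshold-accuracy and significance checks above can be reproduced from the per-cage L scores; a sketch in which the exhaustive search over observed scores and the use of Welch's variant of the t-test are our assumptions:

```python
import numpy as np
from scipy.stats import ttest_ind

def compare_groups(ll_adult, ll_young):
    """Welch's t-test on per-cage normalised log-likelihoods of the two
    demographics, plus the accuracy of the best single threshold
    separating them. Exhaustively trying each observed score as the
    threshold is an assumption."""
    _, p = ttest_ind(ll_adult, ll_young, equal_var=False)
    scores = np.concatenate([ll_adult, ll_young])
    labels = np.array([1] * len(ll_adult) + [0] * len(ll_young))
    acc = max(float(np.mean((scores >= t) == labels)) for t in scores)
    return p, acc
```

A small p-value together with a high threshold accuracy is the pattern reported in Fig. 12 for the adult-versus-young comparison.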

Analysis of Young mice
Training the model from scratch on the young demographic reveals interestingly different patterns. Firstly, the likelihood plateaued clearly at |Z| = 6 this time, as shown in Fig. 13. Figure 14 shows the parameters for the GBM with |Z| = 6 after optimisation on the young subset. It is noteworthy that the Immobile state (regime D) is less pronounced, which is consistent with the younger mice being more active. Interestingly, while there is a regime associated with Feeding, it is the same for all mice and also much less pronounced: recall that for the adults, the probability of feeding was 0.7 in each of the Feeding regimes. This could indicate that the pecking order, at least at the level of feeding, develops with age.

Discussion
In this paper we have provided a set of tools for biologists to analyse the individual behaviours of group-housed mice over extended periods of time. Our main contribution was the novel GBM (an HMM equipped with a permutation matrix for identity matching) to analyse the joint behaviour dynamics across different cages. This evidenced interesting dominance relationships, and also flagged significant deviations in an alternative, young, age group. In support of the above, we released two datasets: ABODe for training behaviour classifiers and IMADGE for modelling group dynamics (upon which our modelling is based). ABODe was used to develop and evaluate our proposed ALM, which automatically classifies seven behaviours despite clutter and occlusion.
Since our end goal was a working pipeline with which to model mouse behaviour, the tuning of the ALM leaves room for further exploration, especially as regards architectures for the BC. In future work we would like to analyse other mouse demographics. Much of the pipeline should work 'out of the box', but to handle mice of different colours to those in the current dataset, it may be necessary to annotate more data for the ALM and the TIM.
Some contemporary behavioural systems use pose information to determine behaviour. Note that the STLT explicitly uses BBox poses within the attention architecture of the layout branch: nonetheless, the model was inferior to the LFB.

Figure 14: GBM parameters on the Young mice data for |Z| = 6. Arrangement is as in Fig. 9.

A.5.3 Maximising for Ω
As always, this is a constrained optimisation by virtue of the need for valid probabilities.We start from the Lagrangian: which after incorporating into the previous equation gives the maximum-a-posteriori update:
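The two missing displays can be sketched as follows, assuming a Dirichlet prior with parameters α_ij on each row of Ω and writing E[n_ij] for the expected transition counts from the E-step (both symbols are our assumptions; the paper's own notation may differ):

```latex
% Lagrangian, with multipliers \lambda_i enforcing row-stochasticity of \Omega
\mathcal{L}(\Omega,\lambda)
  = \sum_{i,j}\bigl(\mathbb{E}[n_{ij}] + \alpha_{ij} - 1\bigr)\log\Omega_{ij}
  + \sum_i \lambda_i\Bigl(1 - \sum_j \Omega_{ij}\Bigr)

% Setting \partial\mathcal{L}/\partial\Omega_{ij} = 0 and solving for \lambda_i
% via the constraint \sum_j \Omega_{ij} = 1 gives the MAP update:
\Omega_{ij}
  = \frac{\mathbb{E}[n_{ij}] + \alpha_{ij} - 1}
         {\sum_{j'}\bigl(\mathbb{E}[n_{ij'}] + \alpha_{ij'} - 1\bigr)}
```

This is the standard row-stochastic MAP update for an HMM transition matrix; with a uniform prior (α_ij = 1) it reduces to the usual maximum-likelihood normalised-count update.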

B Labelling Behaviour in ABODe
A significant effort in the curation of the ABODe data was the annotation of behaviours. This involved the development of a well-defined annotation schema and a rigorous annotation process, building upon the expertise of the animal-care technicians at the MLC at MRC Harwell and our own experience with annotation processes.

B.1 Annotation Schema
Labels are specified per mouse per BTI, capturing both the behaviour and an indication of whether it is observable.

B.1.1 Behaviour
The schema admits nine behaviours and three other labels, as shown in Tab. B.1. In particular, the labels Hidden, Unidentifiable, Tentative and Other ensure that the annotator can specify a label in every instance, and clarify the source of any ambiguity. In this way, the labels are mutually exclusive and exhaustive, ensuring that each BTI is given exactly one behaviour label and eliminating ambiguity about the intention of the annotator.

B.1.2 Observability
The Hidden label, while treated as mutually exclusive with the other behaviours for the purpose of annotation, actually represents a hierarchical label space: technically, Hidden mice are doing one of the other behaviours, but we cannot tell which, and any subsequent modelling might benefit from treating these differently. We thus sought to further specify the observability of the mice as a label space in its own right, as shown in Tab. B.2.

B.2 Annotation Process
The annotations were carried out using the BORIS software [22]: this was chosen for its versatility, familiarity to the authors and open-source implementation.

B.2.1 Recruitment
Given the resource constraints of the project, we were only able to recruit a single expert animal-care technician (henceforth the phenotyper) to do our annotations. This limited the scale of our dataset, but on the other hand simplified the data-curation process and ensured consistency throughout the dataset. In particular, it should be noted that, given the difficulty of the task (which requires very specific expertise), the annotation cannot be crowdsourced.

Hidden

Mouse is fully (or almost fully) occluded and barely visible. Note that while the annotator may have their own intuition of what the mouse is doing (because they saw it before), they should still annotate it as Hidden.

Unidentifiable [N/ID]
The annotator cannot identify the mouse with certainty. Typically, there is at least one other mouse which also carries the Unidentifiable flag.

Immobile [Imm]
Mouse is not moving (apart from breathing); it may or may not be sleeping.

Feeding [Feed]
The mouse is eating, typically with its mouth/head in the hopper: it may also be eating from the ground.

Drinking [Drink]
Drinking from the water spout.

Allo-Grooming [A-Grm]
Mouse is grooming another cage member.In this case, the annotator must indicate the recipient of the grooming through the Modifier field.

Climbing [Climb]
All feet off the floor and NOT on the tunnel, with the nose outside the food hopper if the mouse is using it for support (i.e. it should not be eating as well).

Micro-motion [uMove]
A general motion activity while staying in the same place.This could include for example sniffing/looking around/rearing.

Tentative [Tent]
The mouse is exhibiting one of the behaviours in the schema, but the annotator is uncertain of which.If possible, the subset of behaviours that are tentative should be specified as a Modifier.In general, this is an indication that the behaviour needs to be evaluated by another annotator.

Other [Other]
Mouse is doing something which is not accounted for in this schema. Certainly, aggressive behaviour will fall here, but there may be other behaviours we have not considered.

Observable

Mouse is visible (or enough of it to distinguish between some behaviours). Note that it may still be difficult to identify with certainty what the mouse is doing, but at a minimum the annotator can differentiate between Immobile and other behaviours.

Ambiguous [Amb]
All other cases, especially when it is borderline or when the mouse cannot be identified with certainty.
[12, Sec. 3.1.4] documents attempts to use behaviour annotations obtained through an online crowdsourcing platform: an initial in-depth analysis indicated that the labellings were not of sufficiently high quality to be reliable for training our models. To mitigate the potential shortcomings of a single annotator, we: (a) carried out a short training phase for the phenotyper, with a set of clips annotated simultaneously by the phenotyper and ourselves, (b) designed automated sanity checks to be run on the annotations (see below), and (c) re-annotated the observability labels ourselves.

B.2.2 Method
The phenotyper was provided with the 200 two-minute clips, grouped into batches of 20 clips each (10 batches in total, 400 minutes of annotations). The clips were randomised and stratified such that each batch contains 10 clips from the training split, 4 from the validation split and 6 from the testing split. This procedure was followed to reduce the effect of any shift in annotation quality (as the phenotyper saw more of the data) on the data splits.
For each clip, the phenotyper had access to the CLAHE-processed video and the RFID-based position information. We made the conscious decision not to use the BBox localisation from the TIM, since this (a) decouples the behaviour annotations from the performance of upstream components, and (b) provides a more realistic estimate of end-to-end performance.
Although behaviour is defined per BTI, annotating in this manner is not efficient for humans: instead, the annotator was tasked with specifying intervals of a specific behaviour, defined by their start and end points. This is also the motivation behind limiting clips to two-minute snippets: these are long enough to encompass several behaviours, but more manageable than the 30-minute segments, and also provide more variability (as they allow us to sample from more cages).
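Converting interval annotations back to per-BTI labels is a simple rasterisation; a sketch, in which labelling each BTI by whichever interval covers its midpoint, and the Hidden default for uncovered BTIs, are our assumptions:

```python
def intervals_to_bti(intervals, n_bti, bti_len=1.0, default="Hidden"):
    """Rasterise (start, end, label) annotation intervals onto fixed
    behaviour-time-intervals (BTIs). Each BTI takes the label of the
    interval covering its midpoint; the midpoint rule and the default
    label are assumptions."""
    labels = [default] * n_bti
    for i in range(n_bti):
        mid = (i + 0.5) * bti_len  # midpoint of BTI i, in seconds
        for start, end, lab in intervals:
            if start <= mid < end:
                labels[i] = lab
    return labels
```

Since intervals in the schema are mutually exclusive per mouse, at most one interval covers each midpoint and the conversion is unambiguous.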

B.2.3 Quality Control
To train the phenotyper, we provided a batch of four (manually chosen) snippets, which were also annotated by ourselves: this enabled the phenotyper to be inducted into using BORIS and into navigating the annotation schema, with feedback provided as required. Following the annotation of each production batch, we also ran the labellings through a set of automated checks which guarded against some common errors. These were reported back to the phenotyper, although they had very limited time to act on the feedback, which impacted the resulting data quality.
The main data-quality issue related to the misinterpretation, and consequent over-use, of the Hidden label by the phenotyper. To rectify this, we revisited the samples labelled as Hidden and clarified their observability as per the schema in Tab. B.2. Samples which the phenotyper had labelled as anything other than Hidden (except for Unidentifiable samples, which were ignored as ambiguous) were retained as Observable: we have no reason to believe that the phenotyper reported a behaviour when it should have been Hidden. The only exception was when there was a clear misidentification of the mice, which was rectified (we had access to the entire segment, which provided longer-term identity cues in ambiguous conditions). Note that our re-annotation relates only to observability: however, when converting a previously Hidden sample to Observable, we provided a 'best-guess' annotation of the behaviour. These best-guess annotations are clearly marked, allowing us to defer to the superior expertise of the phenotyper in differentiating between actual behaviours where this is critical (e.g. for training models).

B.2.4 Statistics
We end this section by reporting dataset statistics. ABODe consists of 200 snippets, each two minutes in length. These were split (across recording boundaries) into Training, Validation and Test sets. Table B.3 shows the number of admissible mouse-BTIs annotated as observable/hidden, and the distribution of behaviours for observable samples.

Figure 4: ROC curves for various architectures of the OC evaluated on the Validation split.Each coloured line shows the TP rate against the FP rate for various operating thresholds: the 'default' 0.5 threshold in each case is marked with a cross '×'.The baseline (worst-case) model is shown as a dotted line.

Figure 5: Behaviour confusion matrix of the End-to-End model as a Hinton plot.The area of each square represents the numerical value, and each row is normalized to sum to 1.

Figure 6: Normalised log-likelihood ( L) of the GBM for various dimensionalities of the latent state over all cages.

Figure 7: Posterior probability of the GBM over Q for all cages (|Z| = 7, model trained on cage L).

Figure 8: RDLs from Tab. 6, printed on the number line: the lowest scoring cages are marked.

Figure 9: Parameters for the GBM with |Z| = 7 trained on Adult mice.For Ω (leftmost panel) we show the transition probabilities: underneath the Z [t+1] labels, we also report the steady-state probabilities (first row) and the expected dwell times (in BTIs, second row).The other three panels show the emission probabilities Ψ k for each mouse as Hinton plots.We omit zeros before the decimal point and suppress values close to 0 (at the chosen precision).

Figure 10: Ethogram for the regime (GBM |Z| = 7) and individual behaviour probabilities for a run from cage B.In all but the light status, darker colours signify higher probability: the hue is purple for Z and matches the assignment of mice to variables X k otherwise.The light-status is indicated by white for lights-on and black for lights-off.Missing data is indicated by grey bars.

Figure 11: Parameters for the per-cage models (|Z| = 7) for cages D, F and L. The order of the latent states is permuted to maximise the similarity with the global model (using the Hungarian algorithm) for easier comparison.The plot follows the arrangement in Fig. 9.

Figure 12: L scores (x-axis) of the GBM on each cage (y-axis, left) in the adult/young age groups, together with the accuracy of a binary threshold on the L (scale on the right).
Algorithm 1: Modified Expectation Maximisation (EM) for the GBM. Equations appear in the appendix.
The permutation matrix Q can therefore take on one of K! distinct values (permutations). This permutation-matrix setup has been used previously, e.g. for problems of data association in multi-target tracking (see, e.g., Murphy [42, Sec. 29.9.3]), and in static matching problems, see e.g. Mena et al. [41], Powell and Smith [47] and Nazabal et al. [45]. In those cases the focus is on approximate matching due to a combinatorial explosion, but here we are able to use exact inference due to the low dimensionality (in our case |Q| = 3! = 6).

Table 1: Definition of classification outcomes for the Observability problem. GT refers to the ground-truth (annotated) label, and the standard machine-learning terms (True Positive (TP), FP, True Negative (TN), FN) are in square brackets.
The challenge in classifying observability was to handle the severe class imbalance, which called for judicious feature selection and classifier tuning. Although the observability sample count is high within ABODe, the skewed nature of the data (with only 7% Not Observable) is prone to overfitting. Features were selected based on their correlation with the observability flag, and narrowed down to the subset already listed (Sec. 4.1.1). As for classifiers, we explored Logistic Regression (LgR), Naïve Bayes (NB), Random Forests (RF), Support-Vector Machines (SVM) and feed-forward Neural Networks (NN), visualising their ROC curves [see e.g. 43, Sec. 5.7.2.1] (Fig. 4).

Table 2: Comparison of LgR and NB as observability classifiers (on the validation set) at different operating points. The best-performing score in each category appears in bold. For context, there are 10,124 samples, of which 750 are Not Observable.

Table 3: Evaluation of the baseline (prior-probability), STLT and LFB models on the Training and Validation sets, in terms of Accuracy, macro-F1 and normalised log-likelihood (L). The best-performing score in each category (on the validation set) appears in bold.

Table 4: Test performance of the ALM and baseline model, in terms of observability and behaviour. The best-performing score in each category appears in bold. Within the former, U and W refer to the counts of Unreliable and Wasteful respectively; the dataset contains 20,581 samples.

Table 5: Distribution of behaviours across cages, per age group.

Table 6: Evaluation of the GBM (|Z| = 7) on data from each cage (columns), in terms of the normalised log-likelihood (L) and RDL.

Table B.1: Definition of Behaviour Labels, listed in order of precedence (most important first). The short-hand label (used in figures/tables) is in square brackets.

Table B.2: Definitions of Observability labels (shorthand label in square brackets).

Not Observable

Mouse is fully (or almost) occluded, with not enough information to give any behaviour. When mice are huddling (and clearly immobile), a mouse is still considered Not Observable if none of it is visible.

Table B.3: Statistics for ABODe, showing the number of samples (mouse-BTI pairs) for each behaviour and data partition.