A Survey of Deep Meta-Learning

Deep neural networks can achieve great successes when presented with large data sets and sufficient computational resources. However, their ability to learn new concepts quickly is quite limited. Meta-learning is one approach to address this issue, by enabling the network to learn how to learn. The exciting field of Deep Meta-Learning advances at great speed, but lacks a unified, insightful overview of current techniques. This work presents just that. After providing the reader with a theoretical foundation, we investigate and summarize key methods, which are categorized into i) metric-, ii) model-, and iii) optimization-based techniques. In addition, we identify the main open challenges, such as performance evaluations on heterogeneous benchmarks, and reduction of the computational costs of meta-learning.


Introduction
In recent years, deep learning techniques have achieved remarkable successes on various tasks, including game-playing (Mnih et al., 2013;Silver et al., 2016), image recognition (Krizhevsky et al., 2012;He et al., 2015), and machine translation (Wu et al., 2016). Despite these advances, ample challenges remain to be solved, such as the large amounts of data and training that are needed to achieve good performance. These requirements severely constrain the ability of deep neural networks to learn new concepts quickly, one of the defining aspects of human intelligence (Jankowski et al., 2011;Lake et al., 2017).
Meta-learning has been suggested as one strategy to overcome this challenge (Naik and Mammone, 1992;Schmidhuber, 1987;Thrun, 1998). The key idea is that meta-learning agents improve their own learning ability over time, or equivalently, learn to learn. The learning process is primarily concerned with tasks (sets of observations) and takes place at two different levels: an inner- and an outer-level. At the inner-level, a new task is presented, and the agent tries to quickly learn the associated concepts from the training observations. This quick adaptation is facilitated by knowledge that it has accumulated across earlier tasks at the outer-level. Thus, whereas the inner-level concerns a single task, the outer-level concerns a multitude of tasks.
Historically, the term meta-learning has been used with various scopes. In its broadest sense, it encapsulates all systems that leverage prior learning experience in order to learn new tasks more quickly (Vanschoren, 2018). This broad notion includes more traditional algorithm selection and hyperparameter optimization techniques for Machine Learning (Brazdil et al., 2008). In this work, however, we focus on a subset of the meta-learning field which develops meta-learning procedures to learn a good inductive bias for (deep) neural networks. Henceforth, we use the term Deep Meta-Learning to refer to this subfield of meta-learning.
The field of Deep Meta-Learning is advancing at a quick pace, while it lacks a coherent, unifying overview, providing detailed insights into the key techniques. Vanschoren (2018) has surveyed meta-learning techniques, where meta-learning was used in the broad sense, limiting its account of Deep Meta-Learning techniques. Also, many exciting developments in deep meta-learning have happened after the survey was published. A more recent survey by Hospedales et al. (2020) adopts the same notion of deep meta-learning as we do, but aims for a broad overview, omitting technical details of the various techniques.
We attempt to fill this gap by providing detailed explications of contemporary Deep Meta-Learning techniques, using a unified notation. In addition, we identify current challenges and directions for future work. More specifically, we cover modern techniques in the field for supervised and reinforcement learning, that have achieved state-of-the-art performance, obtained popularity in the field, and presented novel ideas. Extra attention is paid to MAML (Finn et al., 2017), and related techniques, because of their impact on the field. This work can serve as educational introduction to the field of Deep Meta-Learning, and as reference material for experienced researchers in the field. Throughout, we will adopt the taxonomy used by Vinyals (2017), which identifies three categories of Deep Meta-Learning approaches: i) metric-, ii) model-, and iii) optimization-based meta-learning techniques.
The remainder of this work is structured as follows. Section 2 builds a common foundation on which we will base our overview of Deep Meta-Learning techniques. Section 3, 4, and 5 cover the main metric-, model-, and optimization-based meta-learning techniques, respectively. Section 6 provides a helicopter view of the field, and summarizes the key challenges and open questions. Table 1 gives an overview of notation that we will use throughout this paper.

Foundation
In this section, we build the necessary foundation for investigating Deep Meta-Learning techniques in a consistent manner. To begin with, we contrast regular learning and meta-learning. Afterwards, we briefly discuss how Deep Meta-Learning relates to different fields, what the usual training and evaluation procedure looks like, and which benchmarks are often used for this purpose. We finish this section by describing some applications and context of the meta-learning field.

The Meta Abstraction
In this subsection, we contrast base-level (regular) learning and meta-learning for two different paradigms, i.e., supervised and reinforcement learning.

Expression: Meaning
Meta-learning: Learning meta-knowledge that can be used to learn new tasks more quickly
T_j = (D^tr_{T_j}, D^test_{T_j}): A task consisting of a labeled train and test set
Support set: The train set D^tr_{T_j} associated with a task T_j
Query set: The test set D^test_{T_j} associated with a task T_j
x_i: Example input vector i in the support set
y_i: (One-hot encoded) label of example input x_i from the support set
x: Input in the query set
y: A (one-hot encoded) label for input x
(f/g/h)_θ: Neural network function with parameters θ
Inner-level: At the level of a single task
Outer-level: At meta-level: across tasks
Fast weights: Parameters that were generated for a specific task/example
Base-learner: Learner that works at the inner-level
Meta-learner: Learner that operates at the outer-level
Input embedding: Activation pattern in the final layer of a neural network caused by the input
Task embedding: An internal representation of a task in a network/system
SL: Supervised Learning
RL: Reinforcement Learning

Table 1: Some notation and meaning, which we use throughout this paper.

Regular Supervised Learning
In supervised learning, we wish to learn a function f_θ : X → Y that maps inputs x_i ∈ X to their corresponding outputs y_i ∈ Y. Here, θ are model parameters (e.g., weights in a neural network) that determine the function's behavior. To learn these parameters, we are given a data set of m observations: D = {(x_i, y_i)}^m_{i=1}. Thus, given a data set D, learning boils down to finding the setting of θ that minimizes an empirical loss function L_D, which must capture how the model is performing, such that appropriate adjustments to its parameters can be made. In short, we wish to find

θ_SL := arg min_θ L_D(θ),

where SL stands for "supervised learning". Note that this objective is specific to data set D, meaning that our model f_θ may not generalize to examples outside of D. To measure generalization, one could evaluate the performance on a separate test data set, which contains unseen examples. A popular way to do this is through cross-validation, where one repeatedly creates train and test splits D^tr, D^test ⊂ D and uses these to train and evaluate a model, respectively (Hastie et al., 2009). Finding globally optimal parameters θ_SL is often computationally infeasible. We can, however, approximate them, guided by pre-defined meta-knowledge ω (Hospedales et al., 2020), which includes, e.g., the initial model parameters θ, the choice of optimizer, and the learning rate schedule. As such, we approximate

θ_SL ≈ g_ω(D, L_D),

where g_ω is an optimization procedure that uses pre-defined meta-knowledge ω, data set D, and loss function L_D to produce updated weights g_ω(D, L_D) that (presumably) perform well on D.
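As a minimal sketch, g_ω could be plain gradient descent, in which case the pre-defined meta-knowledge ω comprises the initialization, learning rate, and number of steps. The toy least-squares problem and all names below are illustrative, not from the survey:

```python
import numpy as np

def g_omega(D, loss_grad, theta0, lr=0.1, steps=100):
    """A concrete instance of the optimizer g_omega: plain gradient descent.

    The meta-knowledge omega here is the pre-defined initialization theta0,
    the learning rate, and the number of steps.
    """
    theta = theta0.copy()
    X, y = D
    for _ in range(steps):
        theta -= lr * loss_grad(theta, X, y)
    return theta

# Toy data set D with y = 2*x, so the optimal weight is 2.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])

def mse_grad(theta, X, y):
    # Gradient of the empirical loss L_D(theta) = mean((X @ theta - y)^2).
    return 2 * X.T @ (X @ theta - y) / len(y)

theta_approx = g_omega((X, y), mse_grad, theta0=np.zeros(1))
```

With this fixed ω, the returned parameters approximate θ_SL on the toy data set; meta-learning, discussed next, instead asks how to choose ω itself.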

Supervised Meta-Learning
In contrast, supervised meta-learning does not assume that any meta-knowledge ω is given, or pre-defined. Instead, the goal of meta-learning is to find the best ω, such that our (regular) base-learner can learn new tasks (data sets) as quickly as possible. Thus, whereas supervised regular learning involves one data set, supervised meta-learning involves a group of data sets. The goal is to learn meta-knowledge ω such that our model can learn many different tasks well. Thus, our model is learning to learn. More formally, we have a probability distribution over tasks p(T), and wish to find the optimal meta-knowledge

ω* := arg min_ω E_{T_j ∼ p(T)} [ L_{T_j}( g_ω(T_j, L_{T_j}) ) ].

Here, the inner-level concerns task-specific learning, while the outer-level concerns multiple tasks. One can now easily see why this is meta-learning: we learn ω, which allows for quick learning of tasks T_j at the inner-level. Hence, we are learning to learn.

Regular Reinforcement Learning
In reinforcement learning, we have an agent that learns from experience. That is, it interacts with an environment, modeled by a Markov Decision Process (MDP) M = (S, A, P, r, p 0 , γ, T ).
Here, S is the set of states, A the set of actions, P the transition probability distribution defining P(s_{t+1}|s_t, a_t), r : S × A → R the reward function, p_0 the probability distribution over initial states, γ ∈ [0, 1] the discount factor, and T the time horizon (maximum number of time steps) (Sutton and Barto, 2018;Duan et al., 2016). At every time step t, the agent finds itself in state s_t, in which the agent performs an action a_t, computed by a policy function π_θ (i.e., a_t = π_θ(s_t)), which is parameterized by weights θ. In turn, it receives a reward r_t = r(s_t, π_θ(s_t)) ∈ R and a new state s_{t+1}. This process of interactions continues until a termination criterion is met (e.g., the fixed time horizon T is reached). The goal of the agent is to learn how to act in order to maximize its expected reward. The reinforcement learning (RL) goal is thus to find

θ_RL := arg max_θ E_traj [ Σ_{t=0}^{T} γ^t r(s_t, π_θ(s_t)) ],

where we take the expectation over the possible trajectories traj = (s_0, π_θ(s_0), ..., s_T, π_θ(s_T)) due to the random nature of MDPs (Duan et al., 2016). Note that γ is a hyperparameter that can prioritize short- or long-term rewards by decreasing or increasing it, respectively.
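The discounted return inside the expectation is simple to compute for a single trajectory; this small sketch (names are illustrative) shows how γ trades off short- against long-term reward:

```python
def discounted_return(rewards, gamma):
    """Sum_{t=0}^{T} gamma^t * r_t: the quantity the RL agent maximizes
    (in expectation over trajectories)."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# A reward of 1 at every step. A gamma close to 1 values future rewards;
# a small gamma prioritizes short-term reward.
short_sighted = discounted_return([1.0, 1.0, 1.0], gamma=0.5)  # 1 + 0.5 + 0.25 = 1.75
far_sighted = discounted_return([1.0, 1.0, 1.0], gamma=1.0)    # 1 + 1 + 1 = 3.0
```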
Also in the case of reinforcement learning it is often infeasible to find the global optimum θ_RL, and thus we settle for approximations. In short, given a learning method ω, we approximate

θ_RL ≈ g_ω(T_j, L_{T_j}),

where T_j is the given MDP, L_{T_j} the associated objective (e.g., the negative expected reward), and g_ω the optimization algorithm, guided by pre-defined meta-knowledge ω.
Note that in a Markov Decision Process (MDP), the agent knows the state at any given time step t. When this is not the case, it becomes a Partially Observable Markov Decision Process (POMDP), where the agent receives only observations O, and uses these to update its belief with regard to the state it is in (Sutton and Barto, 2018).

Meta Reinforcement Learning
The meta abstraction has as its object a group of tasks, or Markov Decision Processes (MDPs) in the case of reinforcement learning. Thus, instead of maximizing the expected reward on a single MDP, the meta reinforcement learning objective is to maximize the expected reward over various MDPs, by learning meta-knowledge ω. Here, the MDPs are sampled from some distribution p(T). So now, we wish to find

ω* := arg max_ω E_{T_j ∼ p(T)} [ E_traj [ Σ_{t=0}^{T} γ^t r(s_t, π_θ(s_t)) ] ],

where the inner expectation is the expected reward obtained on MDP T_j after learning with meta-knowledge ω.

Contrast with other Fields
Now that we have provided a formal basis for our discussion of both supervised and reinforcement meta-learning, it is time to briefly contrast meta-learning with two related areas of machine learning that also aim to improve the speed of learning. We will start with transfer learning.

Transfer Learning In transfer learning, one tries to transfer knowledge of previous tasks to new, unseen tasks (Pan and Yang, 2009;Taylor and Stone, 2009). As such, it subsumes meta-learning, where we attempt to leverage meta-knowledge to learn new tasks more quickly. A key property of meta-learning techniques is their meta-objective, which explicitly aims to optimize performance across a distribution over tasks (as seen in previous sections by taking the expected loss over a distribution of tasks). This objective need not always be present in transfer learning techniques, e.g., when one pre-trains a model on a large data set and fine-tunes the learned weights on a smaller data set.
Multi-task learning Another, closely related field is that of multi-task learning. In multi-task learning, a model is jointly trained to perform well on multiple fixed tasks (Hospedales et al., 2020). Meta-learning, in contrast, aims to find a model that can learn new (previously unseen) tasks quickly. This difference is illustrated in Figure 1.

Figure 1: The difference between multi-task learning and meta-learning.²

The Meta-Setup
In the previous section, we have described the learning objectives for (meta) supervised and reinforcement learning. We will now describe the general setting that can be used to achieve these objectives. In general, one optimizes a meta-objective by using various tasks, which are data sets in the context of supervised learning, and (Partially Observable) Markov Decision Processes in case of reinforcement learning. This is done in three stages: the i) meta-train stage, ii) meta-validation stage, and iii) meta-test stage, each of which is associated with a set of tasks.
First, in the meta-train stage, the meta-learning algorithm is applied to the meta-train tasks. Second, the meta-validation tasks can then be used to evaluate the performance on unseen tasks, which were not used for training. Effectively, this measures the meta-generalization ability of the trained network, which serves as feedback to tune, e.g., hyperparameters of the meta-learning algorithm. Third, the meta-test tasks are used to give a final performance estimate of the meta-learning technique.

N -way, k-shot Learning
A frequently used instantiation of this general meta-setup is called N-way, k-shot classification (see Figure 2). This setup is also divided into the three stages meta-train, meta-validation, and meta-test, which are used for meta-learning, meta-learner hyperparameter optimization, and evaluation, respectively. Each stage has a corresponding set of disjoint labels, i.e., L_tr, L_val, L_test ⊂ Y, such that L_tr ∩ L_val = ∅, L_tr ∩ L_test = ∅, and L_val ∩ L_test = ∅. In a given stage s, tasks/episodes T_j = (D^tr_{T_j}, D^test_{T_j}) are obtained by sampling examples (x_i, y_i) from the full data set D, such that every y_i ∈ L_s. Note that this requires access to a data set D. Now, the sampling process is guided by the N-way, k-shot principle, which states that every training data set D^tr_{T_j} should contain exactly N classes and k examples per class, implying that |D^tr_{T_j}| = N · k. Furthermore, the true labels of examples in the test set D^test_{T_j} must be present in the train set D^tr_{T_j} of a given task T_j. D^tr_{T_j} acts as a support set, literally supporting classification decisions on the query set D^test_{T_j}. Throughout this paper, we will use the terms train/support and test/query sets interchangeably. Importantly, note that with this terminology, the test set (or query set) of a task is actually used during the meta-training phase. Furthermore, the fact that the labels across stages are disjoint ensures that we test the ability of a model to learn new concepts.

2. Adapted from https://meta-world.github.io/

Figure 2: Illustration of N-way, k-shot classification, where N = 5 and k = 1. Meta-validation tasks are not displayed. Adapted from Ravi and Larochelle (2017).
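The episode sampling procedure just described can be sketched in a few lines; `sample_episode`, `data_by_class`, and `q` (the number of query examples per class) are illustrative names, not terminology from the survey:

```python
import random

def sample_episode(data_by_class, N, k, q):
    """Sample an N-way, k-shot episode (task) T_j.

    data_by_class maps each label in the current stage's label set L_s
    to its list of examples.
    """
    classes = random.sample(sorted(data_by_class), N)
    support, query = [], []
    for c in classes:
        examples = random.sample(data_by_class[c], k + q)
        support += [(x, c) for x in examples[:k]]  # D_tr: exactly N*k examples
        query += [(x, c) for x in examples[k:]]    # D_test: labels also occur in D_tr
    return support, query

# Toy "data set" with 8 classes of 10 examples each; a 5-way, 1-shot episode.
data = {c: list(range(10)) for c in "abcdefgh"}
support, query = sample_episode(data, N=5, k=1, q=3)
```

Note that the query labels are by construction a subset of the support labels, as the N-way, k-shot principle requires.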
The meta-learning objective in the training phase is to minimize the loss function of the model predictions on the query sets, conditioned on the support sets. As such, for a given task T j , the model 'sees' the support set, and extracts information from the support set to guide its predictions on the query set. By applying this procedure to different episodes/tasks T j , the model will slowly accumulate meta-knowledge ω, which can ultimately speed up learning on new tasks.
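A minimal sketch of this episodic scheme is given below. The toy nearest-neighbor base-learner and the no-op `meta_update` are placeholders, standing in for any of the meta-learning techniques surveyed later; a real meta-learner would use the query loss to update ω:

```python
class NearestNeighborModel:
    """Toy non-parametric base-learner: predict the label of the closest
    support example. It has no meta-knowledge omega to learn."""
    def predict(self, support, x):
        return min(support, key=lambda ex: abs(ex[0] - x))[1]

    def loss(self, support, x, y):
        # 0/1 error on a single query example, conditioned on the support set.
        return 0.0 if self.predict(support, x) == y else 1.0

def meta_train(tasks, model, meta_update):
    """Episodic training skeleton: per task, condition on the support set,
    evaluate on the query set, and let meta_update adjust omega."""
    query_losses = []
    for support, query in tasks:
        loss = sum(model.loss(support, x, y) for x, y in query)
        meta_update(model, loss)  # a real meta-learner updates omega here
        query_losses.append(loss)
    return query_losses

# One tiny task: support {(0, 'a'), (10, 'b')}, query {(1, 'a'), (9, 'b')}.
tasks = [([(0.0, "a"), (10.0, "b")], [(1.0, "a"), (9.0, "b")])]
losses = meta_train(tasks, NearestNeighborModel(), lambda m, l: None)
```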
The easiest way to achieve this is with vanilla neural networks, but as various authors have pointed out (see, e.g., Finn et al. (2017)), more sophisticated architectures vastly outperform such networks. In the remainder of this work, we will review such architectures.
At the meta-validation and meta-test stages, or evaluation phases, the learned meta-information in ω is fixed. The model is, however, still allowed to make task-specific updates to its parameters θ (which implies that it is learning). After task-specific updates, we can evaluate the performance on the test sets. In this way, we test how well a technique performs at meta-learning.
N -way, k-shot classification is often performed for small values of k (since we want our models to learn new concepts quickly, i.e., from few examples). In that case, one can refer to it as few-shot learning.

Common Benchmarks
Here, we briefly describe some benchmarks that can be used to evaluate meta-learning algorithms.
• Omniglot (Lake et al., 2011): This data set presents an image recognition task.
Each image corresponds to one out of 1 623 characters from 50 different alphabets. Every character was drawn by 20 people. Note that in this case, the characters are the classes/labels.
• ImageNet (Deng et al., 2009): This is a very large image classification data set, containing more than 20K classes and over 14 million colored images. miniImageNet is a mini variant of the large ImageNet data set (Deng et al., 2009) for image classification, proposed by Vinyals et al. (2016) to reduce the engineering effort required to run experiments. The mini data set contains 60 000 colored images of size 84 × 84. There are a total of 100 classes present, each with 600 examples. tieredImageNet (Ren et al., 2018) is another variation of the large ImageNet data set. It is similar to miniImageNet, but contains a hierarchical structure. That is, there are 34 classes, each with its own sub-classes.
• CIFAR-10 and CIFAR-100 (Krizhevsky, 2009): Two other image recognition data sets. Each one contains 60K RGB images of size 32 × 32. CIFAR-10 and CIFAR-100 contain 10 and 100 classes respectively, with a uniform number of examples per class (6 000 and 600 respectively). Every class in CIFAR-100 also has a super-class, of which there are 20 in the full data set. Many variants of the CIFAR data sets can be sampled, giving rise to e.g. CIFAR-FS (Bertinetto et al., 2019) and FC-100 (Oreshkin et al., 2018).
• MNIST (LeCun et al., 2010): MNIST presents a hand-written digit recognition task, containing ten classes (for digits 0 through 9). In total, the data set is split into a 60K train and 10K test gray scale images of hand-written digits.
• Meta-Dataset (Triantafillou et al., 2020): This data set comprises several other data sets such as Omniglot (Lake et al., 2011), CUB-200 (Wah et al., 2011), ImageNet (Deng et al., 2009), and more (Triantafillou et al., 2020). An episode is then constructed by sampling a data set (e.g., Omniglot) and selecting a subset of labels to create train and test splits as before. In this way, broader generalization is enforced, since the tasks are more distant from each other.
• Meta-world: A meta reinforcement learning benchmark, containing 50 robotic manipulation tasks (controlling a robot arm to achieve some pre-defined goal, e.g., unlocking a door or playing soccer). It was specifically designed to cover a broad range of tasks, such that meaningful generalization can be measured.

Some Applications of Meta-Learning
Deep neural networks have achieved remarkable results on various tasks from image recognition, text processing, and game playing to robotics (Silver et al., 2016;Mnih et al., 2013;Wu et al., 2016), but their success depends on the amounts of available data (Sun et al., 2017) and computing resources. Deep meta-learning reduces this dependency by allowing deep neural nets to learn new concepts quickly. As a result, meta-learning widens the applicability of deep learning techniques to many application domains. Such areas include few-shot image classification (Finn et al., 2017;Snell et al., 2017;Ravi and Larochelle, 2017), robotic control policy learning (Gupta et al., 2018;Nagabandi et al., 2019) (see Figure 3), hyperparameter optimization (Antoniou et al., 2019;Schmidhuber et al., 1997), meta-learning learning rules (Bengio et al., 1991, 1997;Miconi et al., 2018, 2019), abstract reasoning (Barrett et al., 2018), and many more. For a larger overview of applications, we refer interested readers to Hospedales et al. (2020).

Figure 3: Learning continuous robotic control tasks is an important application of Deep Meta-Learning techniques. Image taken from (Yu et al., ).

The Meta-Learning Field
As mentioned in the introduction, meta-learning is a broad area of research, as it encapsulates all techniques that leverage prior learning experience to learn new tasks more quickly (Vanschoren, 2018). We can identify two distinct communities in the field, each with a different focus: i) algorithm selection and hyperparameter optimization for machine learning techniques, and ii) search for inductive bias in deep neural networks. We will refer to these communities as group i) and group ii), respectively. Below, we give a brief description of the first field, and a historical overview of the second.

Group i) uses a more traditional approach, selecting a suitable machine learning algorithm and hyperparameters for a new data set D (Peng et al., 2002). This selection can, for example, be made by leveraging prior model evaluations on various data sets D', and by using the model which achieved the best performance on the most similar data set (Vanschoren, 2018). Such traditional approaches require (large) databases of prior model evaluations for many different algorithms. This has led to initiatives such as OpenML (Vanschoren et al., 2014), where researchers can share such information. However, the success of deep neural networks poses a problem for such techniques, as storing entire neural architectures, with weights, activation functions, etc., is quite impractical.
Driven by advances in neural networks another approach, taken by group ii), is to adopt the view of a self-improving agent, which improves its learning ability over time by finding a good inductive bias (a set of assumptions that guide predictions). We now present a brief historical overview of developments in this field of Deep Meta-Learning, based on Hospedales et al. (2020).
Pioneering work was done by Schmidhuber (1987) and Hinton and Plaut (1987). Schmidhuber developed a theory of self-referential learning, where the weights of a neural network can serve as input to the model itself, which then predicts updates (Schmidhuber, 1987, 1993). In that same year, Hinton and Plaut (1987) proposed to use two weights per neural network connection, i.e., slow and fast weights, which serve as long- and short-term memory, respectively. Later came the idea of meta-learning learning rules (Bengio et al., 1991, 1997). Subsequent works proposed meta-learning techniques that use gradient descent and backpropagation; these have been pivotal to the current field of Deep Meta-Learning, as the majority of techniques rely on backpropagation, as we will see on our journey through contemporary Deep Meta-Learning techniques. We will now cover the three categories: metric-, model-, and optimization-based techniques, respectively.

Overview of the rest of this Work
In the remainder of this work, we will look in more detail at individual meta-learning methods. As indicated before, the techniques can be grouped into three main categories (Vinyals, 2017), namely i) metric-, ii) model-, and iii) optimization-based methods. We will discuss them in sequence.
To help give an overview of the methods, we draw your attention to the following tables. Table 2 summarizes the three categories, and provides key ideas, strengths and weaknesses of the approaches. The terms and technical details are explained more fully in the remainder of this paper. Table 3 contains an overview of all techniques that are discussed further on.

Metric-based Meta-Learning
At a high level, the goal of metric-based techniques is to acquire, among other things, meta-knowledge ω in the form of a good feature space that can be used for various new tasks. In the context of neural networks, this feature space is parameterized by the weights θ of the networks. New tasks can then be learned by comparing new inputs to example inputs (of which we know the labels) in the meta-learned feature space. The higher the similarity between a new input and an example, the more likely it is that the new input will have the same label as the example input.
Metric-based techniques are a form of meta-learning as they leverage their prior learning experience (meta-learned feature space) to 'learn' new tasks more quickly. Here, 'learn' is

Category: Metric | Model | Optimization
Key idea: Input similarity | Internal task representation | Optimize for fast adaptation
Strength (+): Simple and effective | Flexible | More robust generalizability
Weakness (-): Limited to supervised learning | Weak generalization | Computationally expensive

Table 2: High-level overview of the three Deep Meta-Learning categories, i.e., i) metric-, ii) model-, and iii) optimization-based techniques, and their main strengths and weaknesses. Recall that T_j is a task, D^tr_{T_j} the corresponding training set, k_θ(x, x_i) a kernel function returning the similarity between the two inputs x and x_i, y_i are true labels for known inputs x_i, θ are base-learner parameters, and g_ϕ is a (learned) optimizer with parameters ϕ.
used in a non-standard way since metric-based techniques do not make any network changes when presented with new tasks, as they rely solely on input comparisons in the already metalearned feature space. These input comparisons are a form of non-parametric learning, i.e., new task information is not absorbed into the network parameters.
More formally, metric-based learning techniques aim to learn a similarity kernel, or equivalently, attention mechanism k θ (parameterized by θ), that takes two inputs x 1 and x 2 , and outputs their similarity score. Larger scores indicate larger similarity. Class predictions for new inputs x can then be made by comparing x to example inputs x i , of which we know the true labels y i . The underlying idea being that the larger the similarity between x and x i , the more likely it becomes that x also has label y i .
Given a task T_j = (D^tr_{T_j}, D^test_{T_j}) and an unseen input vector x ∈ D^test_{T_j}, a probability distribution over classes Y is computed/predicted as a weighted combination of labels from the support set D^tr_{T_j}, using similarity kernel k_θ, i.e.,

P_θ(Y | x, D^tr_{T_j}) = Σ_{(x_i, y_i) ∈ D^tr_{T_j}} k_θ(x, x_i) y_i.

Importantly, the labels y_i are assumed to be one-hot encoded, meaning that they are represented by zero vectors with a '1' on the position of the true class. For example, suppose there are five classes in total, and our example x_1 has true class 4. Then, the one-hot encoded label is y_1 = [0, 0, 0, 1, 0]. Note that the probability distribution P_θ(Y | x, D^tr_{T_j}) over classes is a vector of size |Y|, in which the i-th entry corresponds to the probability that input x has class Y_i (given the support set). The predicted class is thus ŷ = arg max_{i=1,2,...,|Y|} P_θ(Y | x, D^tr_{T_j})_i, where P_θ(Y | x, D^tr_{T_j})_i is the computed probability that input x has class Y_i.
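The kernel-weighted prediction rule above can be sketched as follows. The RBF kernel, the support set, and all names here are hypothetical stand-ins for a learned k_θ and real task data; the normalization makes the output a proper probability distribution:

```python
import numpy as np

def predict_distribution(kernel, support_X, support_Y_onehot, x):
    """P(Y | x, D_tr): a kernel-weighted combination of one-hot labels,
    normalized so that the entries sum to 1."""
    weights = np.array([kernel(x, xi) for xi in support_X])
    probs = weights @ support_Y_onehot
    return probs / probs.sum()

# Hypothetical 3-class support set; a simple RBF kernel stands in for k_theta.
kernel = lambda a, b: np.exp(-np.sum((a - b) ** 2))
support_X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
support_Y = np.eye(3)  # one-hot labels for classes 0, 1, 2

p = predict_distribution(kernel, support_X, support_Y, np.array([0.1, 0.1]))
y_hat = int(np.argmax(p))  # class 0: the most similar support example
```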

Example
Suppose that we are given a task T_j = (D^tr_{T_j}, D^test_{T_j}), where the support set D^tr_{T_j} contains a few two-dimensional example inputs with known labels. For simplicity, the example will not use an embedding function, which maps example inputs onto a (more informative) embedding space. Our test set contains only one example. The goal is then to predict the correct label for the new input [4, 0.5] using only the examples in D^tr_{T_j}. The problem is visualized in Figure 4, where red vectors correspond to example inputs from our training set. The blue vector is the new input that needs to be classified. Intuitively, this new input is most similar to the vector [6, 0], which means that we expect the label for the new input to be the same as that for [6, 0], i.e., 4. Now, suppose we use a fixed similarity kernel, namely the cosine similarity, i.e.,

k(x, x_i) = (Σ_n x_n (x_i)_n) / (sqrt(Σ_n x_n²) · sqrt(Σ_n (x_i)_n²)).

Here, v_n denotes the n-th element of placeholder vector v (substitute v by x or x_i). We can now compute the cosine similarity between the new input [4, 0.5] and every example input x_i, as done in Table 4. Note that the resulting score vector is not really a probability distribution. That would require normalization such that every element is at least 0 and the sum of all elements is 1. For the sake of this example, we do not perform this normalization, as it is clear that class 4 (the class of the most similar example input [6, 0]) will be predicted.
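The example's arithmetic can be verified in a few lines. The support set below is hypothetical, except for the vector [6, 0] with label 4, which the example text mentions explicitly:

```python
import math

def cosine_similarity(x, xi):
    """k(x, x_i): dot product divided by the product of the vector norms."""
    dot = sum(a * b for a, b in zip(x, xi))
    norm = lambda v: math.sqrt(sum(a * a for a in v))
    return dot / (norm(x) * norm(xi))

# Query input from the example, plus a hypothetical support set that
# contains the example's vector [6, 0] with label 4.
query = [4.0, 0.5]
support = [([-2.0, 4.0], 3), ([6.0, 0.0], 4), ([0.0, -4.0], 1)]

scores = [(cosine_similarity(query, xi), yi) for xi, yi in support]
y_hat = max(scores)[1]  # label of the most similar support example: 4
```

The similarity to [6, 0] is roughly 0.99, far above the other scores, so label 4 is predicted, matching the intuition in the text.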
One may wonder why such techniques are meta-learners, for we could take any single data set D and use pair-wise comparisons to compute predictions. However, at the outer-level, metric-based meta-learners are trained on a distribution of different tasks, in order to learn (among other things) a good input embedding function. This embedding function facilitates inner-level learning, which is achieved through pair-wise comparisons. As such, one learns an embedding function across tasks to facilitate task-specific learning, which is equivalent to "learning to learn", or meta-learning. After this introduction to metric-based methods, we will now cover some key metric-based techniques.

Siamese Neural Networks
A Siamese neural network (Koch et al., 2015) consists of two neural networks f_θ that share the same weights θ. Siamese neural networks take two inputs x_1, x_2, and compute two hidden states f_θ(x_1), f_θ(x_2), corresponding to the activation patterns in the final hidden layers. These hidden states are fed into a distance layer, which computes a distance vector d, where d_i = |f_θ(x_1)_i − f_θ(x_2)_i| is the absolute distance between the i-th elements of f_θ(x_1) and f_θ(x_2). From this distance vector, the similarity between x_1 and x_2 is computed as σ(α^T d), where σ is the sigmoid function (with output range [0, 1]), and α is a vector of free weighting parameters, determining the importance of each d_i. This network structure can be seen in Figure 5. Koch et al. (2015) applied this technique to few-shot image recognition in two stages. In the first stage, they train the twin network on an image verification task, where the goal is to output whether two input images x_1 and x_2 have the same class. The network is thus stimulated to learn discriminative features. In the second stage, where the model is confronted with a new task, the network leverages its prior learning experience. That is, given a task T_j = (D^tr_{T_j}, D^test_{T_j}) and a previously unseen input x ∈ D^test_{T_j}, the predicted class ŷ is equal to the label y_i of the example (x_i, y_i) ∈ D^tr_{T_j} which yields the highest similarity score to x. In contrast to other techniques mentioned later in this section, Siamese neural networks do not directly optimize for good performance across tasks (consisting of support and query sets). However, they do leverage learned knowledge from the verification task to learn new tasks more quickly.
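A minimal sketch of the similarity computation σ(α^T d) is given below. The identity embedding and the weights α are hypothetical stand-ins for the learned network and parameters; here α is chosen negative so that more distant pairs receive lower similarity scores, as a trained α would arrange:

```python
import numpy as np

def siamese_similarity(f, alpha, x1, x2):
    """Siamese score sigma(alpha^T d), with d the component-wise absolute
    distance between the two shared-weight embeddings f(x1) and f(x2)."""
    d = np.abs(f(np.asarray(x1, dtype=float)) - f(np.asarray(x2, dtype=float)))
    z = alpha @ d
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid, output in [0, 1]

# Hypothetical identity embedding; negative alpha: larger distance -> lower score.
f = lambda x: x
alpha = -np.ones(2)

same = siamese_similarity(f, alpha, [1.0, 2.0], [1.0, 2.0])   # d = 0 -> 0.5
diff = siamese_similarity(f, alpha, [1.0, 2.0], [5.0, -3.0])  # much lower
```

At prediction time, one would compute this score between the query input and every support example, and assign the label of the highest-scoring pair.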
In summary, Siamese neural networks are a simple and elegant approach to perform few-shot learning. However, they are not readily applicable outside the supervised learning setting.
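The similarity computation described above can be sketched as follows; the one-layer embedding network and the weighting vector α below are hypothetical, random stand-ins for the trained twin network:

```python
import numpy as np

def embed(x, W):
    # Hypothetical stand-in for the shared embedding network f_theta:
    # a single linear layer followed by a ReLU.
    return np.maximum(0, W @ x)

def siamese_similarity(x1, x2, W, alpha):
    # Component-wise absolute distance between the two embeddings,
    # weighted by the free parameters alpha and squashed to [0, 1].
    d = np.abs(embed(x1, W) - embed(x2, W))
    return 1.0 / (1.0 + np.exp(-(alpha @ d)))  # sigmoid(alpha^T d)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
alpha = rng.standard_normal(8)
x = rng.standard_normal(4)

score_same = siamese_similarity(x, x, W, alpha)              # identical inputs
score_diff = siamese_similarity(x, rng.standard_normal(4), W, alpha)
```

Note that identical inputs yield d = 0 and hence a similarity of exactly σ(0) = 0.5; in practice, the learned weights α and θ calibrate the score.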

Matching Networks
Matching networks (Vinyals et al., 2016) build upon the idea that underlies Siamese neural networks (Koch et al., 2015). That is, they leverage pair-wise comparisons between the given support set D tr T j (for a task T j ) and new inputs x ∈ D test T j from the query/test set which we want to classify. However, instead of assigning the class y i of the most similar example input x i , matching networks use a weighted combination of all example labels y i in the support set, based on the similarity of inputs x i to the new input x. More specifically, predictions are computed as follows: ŷ = Σ_{i=1}^{m} a(x, x i ) y i , where a is a non-parametric (non-trainable) attention mechanism, or similarity kernel. This classification process is shown in Figure 6. In this figure, the input to f θ has to be classified, using the support set D tr T j (input to g θ ).
The attention mechanism consists of a softmax over the cosine similarity c between the input representations, i.e.,

a(x, x i ) = e^{c(f φ (x), g ϕ (x i ))} / Σ_{j=1}^{m} e^{c(f φ (x), g ϕ (x j ))},

where f φ and g ϕ are neural networks, parameterized by φ and ϕ, that map raw inputs to a (lower-dimensional) latent vector, which corresponds to the output of the final hidden layer of a neural network. As such, neural networks act as embedding functions. Now, the larger the cosine similarity between the embeddings of x and x i , the larger a(x, x i ), and thus the larger the influence of label y i on the predicted label ŷ for input x. Vinyals et al. (2016) propose two main choices for the embedding functions. The first is to use a single neural network, such that θ = φ = ϕ and thus f φ = g ϕ . This setup is the default form of matching networks, as shown in Figure 6. The second choice is to make f φ and g ϕ dependent on the support set D tr T j using Long Short-Term Memory networks (LSTMs). In that case, f φ is represented by an attention LSTM, and g ϕ by a bidirectional one. This choice of embedding functions is called Full Context Embeddings (FCE), and yielded an accuracy improvement of roughly 2% on miniImageNet compared to regular matching networks, indicating that task-specific embeddings can aid the classification of new data points from the same distribution.
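A minimal sketch of this attention-based prediction, assuming the embeddings have already been computed (the support embeddings, labels, and query below are invented for illustration):

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

def matching_predict(x_emb, support_embs, support_labels, n_classes):
    # Attention a(x, x_i): softmax over cosine similarities to the support set.
    sims = np.array([cosine(x_emb, s) for s in support_embs])
    a = np.exp(sims - sims.max())
    a /= a.sum()
    # Prediction: attention-weighted combination of one-hot support labels.
    onehot = np.eye(n_classes)[support_labels]
    return a @ onehot  # class probability vector y_hat

support = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])  # invented embeddings
labels = np.array([0, 0, 1])
probs = matching_predict(np.array([1.0, 0.05]), support, labels, n_classes=2)
pred = int(np.argmax(probs))
```

Because the query lies close to the two class-0 support embeddings, the attention mass concentrates there and class 0 is predicted.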
Matching networks learn a good feature space across tasks for making pair-wise comparisons between inputs. In contrast to Siamese neural networks (Koch et al., 2015), this feature space (given by weights θ) is learned across tasks, instead of on a distinct verification task.
In summary, matching networks are an elegant and simple approach to metric-based meta-learning. However, these networks are not readily applicable outside of supervised learning settings, and suffer in performance when label distributions are biased (Vinyals et al., 2016).

Prototypical Networks
Just like matching nets (Vinyals et al., 2016), prototypical nets (Snell et al., 2017) base their class predictions on the entire support set D tr T j . However, instead of computing the similarity between new inputs and examples in the support set, prototypical nets only compare new inputs to class prototypes (centroids), which are single vector representations of classes in some embedding space. Since there are at most as many class prototypes as there are examples in the support set, the number of required pair-wise comparisons decreases, saving computational costs.
The underlying idea of class prototypes is that for a task T j , there exists an embedding function that maps the support set onto a space where class instances cluster nicely around the corresponding class prototypes (Snell et al., 2017). Then, for a new input x, the class of the prototype nearest to that input will be predicted. As such, prototypical nets perform nearest centroid/prototype classification in a meta-learned embedding space. This is visualized in Figure 7.
More formally, given a distance function d : X × X → [0, +∞) (e.g., Euclidean distance) and an embedding function f θ , parameterized by θ, prototypical networks compute class probabilities as

p θ (y = k|x) = exp(−d(f θ (x), c k )) / Σ_{k'} exp(−d(f θ (x), c k' )),

where c k is the prototype/centroid for class k, and k' ranges over the classes present in the support set D tr T j . Here, the class prototype for class k is defined as the average of the embeddings f θ (x i ) of all examples x i in the support set such that y i = k. Thus, classes with prototypes that are nearer to the new input x obtain larger probability scores. Snell et al. (2017) found that the squared Euclidean distance function as d gave rise to the best performance. With that distance function, prototypical networks can be seen as linear models. To see this, note that

−||f θ (x) − c k ||² = −f θ (x) T f θ (x) + 2 c k T f θ (x) − c k T c k .

The first term does not depend on the class k, and thus does not affect the classification decision. The remainder can be written as w k T f θ (x) + b k , with w k = 2c k and b k = −c k T c k . Note that this is linear in the output of network f θ , not linear in the input x of the network. Also, Snell et al. (2017) show that prototypical nets (coupled with Euclidean distance) are equivalent to matching nets in one-shot learning settings, as every example in the support set will be its own prototype.

In short, prototypical nets save computational costs by reducing the required number of pair-wise comparisons between new inputs and the support set, by adopting the concept of class prototypes. Additionally, prototypical nets were found to outperform matching nets (Vinyals et al., 2016) in 5-way, k-shot learning for k = 1, 5 on Omniglot (Lake et al., 2011) and miniImageNet (Vinyals et al., 2016), even though they do not use complex task-specific embedding functions. Despite these advantages, prototypical nets are not readily applicable outside of supervised learning settings.

Figure 8: Relation network architecture. Inputs x i from the support set D tr T j (the five example inputs on the left) and the query input (below the f ϕ block) are embedded by f ϕ . All support set embeddings f ϕ (x i ) are then concatenated to the query embedding f ϕ (x). These concatenated embeddings are passed into a relation network g φ , which computes a relation score for every pair (x i , x). The class of the input x i that yields the largest relation score is then predicted. Source: Sung et al. (2018).
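The prototype computation and the softmax over negative squared Euclidean distances can be sketched as follows; the embeddings are invented and stand in for the outputs of f θ :

```python
import numpy as np

def prototypes(embs, labels, n_classes):
    # c_k: mean embedding of the support examples of class k.
    return np.array([embs[labels == k].mean(axis=0) for k in range(n_classes)])

def proto_probs(x_emb, protos):
    # Softmax over negative squared Euclidean distances to the prototypes.
    d2 = ((protos - x_emb) ** 2).sum(axis=1)
    logits = -d2
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Invented 2-way, 2-shot support set embeddings.
embs = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(embs, labels, n_classes=2)
p = proto_probs(np.array([0.1, 0.1]), protos)
pred = int(np.argmax(p))
```

The query embedding lies near the class-0 centroid [0.1, 0.0], so class 0 receives the larger probability.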

Relation Networks
In contrast to previously discussed metric-based techniques, Relation networks (Sung et al., 2018) employ a trainable similarity metric, instead of a pre-defined one (e.g., cosine similarity as used in matching nets (Vinyals et al., 2016)). More specifically, Relation nets consist of two chained neural network modules: the embedding network/module f ϕ , which is responsible for embedding inputs, and the relation network g φ , which computes similarity scores between new inputs x and example inputs x i of which we know the labels. A classification decision is then made by picking the class of the example input which yields the largest relation score (or similarity). Note that Relation nets thus do not use the idea of class prototypes, and simply compare new inputs x to all example inputs x i in the support set, as done by, e.g., matching networks (Vinyals et al., 2016).
More formally, we are given a training set D tr T j with examples (x i , y i ), and a new (previously unseen) input x. Then, for every combination (x, x i ), the Relation network produces a concatenated embedding [f ϕ (x), f ϕ (x i )], which is a vector obtained by concatenating the respective embeddings of x and x i . This concatenated embedding is then fed into the relation module g φ . Finally, g φ computes the relation score between x and x i as r i = g φ ([f ϕ (x), f ϕ (x i )]). The predicted class is then ŷ = y arg max i r i . This entire process is shown in Figure 8. Remarkably, Relation nets use the Mean Squared Error (MSE) of the relation scores as training objective, rather than the more standard cross-entropy loss. The MSE is then propagated backwards through the entire architecture (Figure 8).
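A sketch of the relation-score computation, with a small random two-layer MLP as a hypothetical g φ and invented support embeddings:

```python
import numpy as np

def relation_score(x_emb, xi_emb, W1, W2):
    # g_phi: a small MLP over the concatenated pair embedding [f(x), f(x_i)].
    z = np.concatenate([x_emb, xi_emb])
    h = np.maximum(0, W1 @ z)              # hidden layer with ReLU
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))  # relation score in [0, 1]

rng = np.random.default_rng(1)
W1 = rng.standard_normal((8, 4))           # untrained, illustrative weights
W2 = rng.standard_normal(8)

support_embs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
support_labels = [0, 1]
x_emb = np.array([0.8, 0.1])               # invented query embedding
scores = [relation_score(x_emb, s, W1, W2) for s in support_embs]
pred = support_labels[int(np.argmax(scores))]
```

With trained weights, the pair with the matching class would receive the highest score; here the weights are random, so only the mechanics are illustrated.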
The key advantage of Relation nets is their expressive power, induced by the use of a trainable similarity function. As a result, they often yield better performance than previously discussed techniques that use a fixed similarity metric.

Graph Neural Networks
Graph neural networks (Garcia and Bruna, 2017) use a more general and flexible approach than previously discussed techniques for N -way, k-shot classification. As such, graph neural networks subsume Siamese (Koch et al., 2015) and prototypical networks (Snell et al., 2017). The graph neural network approach represents each task T j as a fully-connected graph G = (V, E), where V is a set of nodes/vertices and E a set of edges connecting nodes. In this graph, nodes v i correspond to input embeddings f θ (x i ), concatenated with their one-hot encoded labels y i . For inputs x from the query/test set (for which we do not have the labels), a uniform prior over all N possible labels is used: y = [1/N , . . . , 1/N ]. Thus, each node contains an input section and a label section. Edges are weighted links that connect these nodes.
The graph neural network then propagates information through the graph using a number of local operators. The underlying idea is that label information can be transmitted from nodes for which we have the labels to nodes for which we have to predict labels. The specific local operators that are used are out of scope for this work; the reader is referred to Garcia and Bruna (2017) for details.
By exposing the graph neural network to various tasks T j , the propagation mechanism can be altered to improve the flow of label information in such a way that predictions become more accurate. As such, in addition to learning a good input representation function f θ , graph neural networks also learn to propagate label information from labeled examples to unlabeled inputs.
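A single, highly simplified propagation step might look as follows; the edge weights (a softmax over negative squared embedding distances) are an illustrative choice, not the learned operators of Garcia and Bruna (2017), and all node contents are invented:

```python
import numpy as np

def softmax_rows(A):
    e = np.exp(A - A.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Node contents: an input embedding plus a label section
# (one-hot for support nodes, uniform prior for the query node).
embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])   # third node = query
lbls = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])   # uniform prior for query

# One propagation step: edge weights from a softmax over negative squared
# embedding distances; each node then aggregates the label sections of all
# nodes, weighted by its edges.
D = ((embs[:, None, :] - embs[None, :, :]) ** 2).sum(-1)
A = softmax_rows(-D)
new_lbls = A @ lbls
pred = int(np.argmax(new_lbls[2]))   # refined label belief of the query node
```

After one step, the query node's label section has shifted towards class 0, whose support embedding it is closest to.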
Graph neural networks achieve good performance in few-shot settings (Garcia and Bruna, 2017), and are also applicable in semi-supervised and active learning settings.

Attentive Recurrent Comparators
Attentive recurrent comparators (Shyam et al., 2017) differ from previously discussed techniques as they do not compare inputs as a whole, but by parts. This approach is inspired by how humans would make a decision concerning the similarity of objects. That is, we shift our attention from one object to the other, and move back and forth to take glimpses of different parts of both objects. In this way, information of two objects is fused from the beginning, whereas other techniques (e.g., matching networks (Vinyals et al., 2016) and graph neural networks (Garcia and Bruna, 2017)) only combine information at the end (after embedding both images) (Shyam et al., 2017).
Given two inputs x i and x, we feed them repeatedly, in interleaved fashion, into a recurrent neural network (controller): x i , x, . . . , x i , x. Thus, the image at time step t is given by I t = x i if t is even, and x otherwise. At each time step t, the attention mechanism focuses on a square region of the current image: G t = attend(I t , Ω t ), where Ω t = W g h t−1 are attention parameters, computed from the previous hidden state h t−1 . The next hidden state h t = RNN(G t , h t−1 ) is given by the glimpse G t at time t and the previous hidden state h t−1 . The entire sequence consists of g glimpses per image. After this sequence is fed into the recurrent neural network (indicated by RNN(•)), the final hidden state h 2g is used as the combined representation of x i relative to x. This process is summarized in Figure 9. Classification decisions can then be made by feeding the combined representations into a classifier. Optionally, the combined representations can be processed by bidirectional LSTMs before passing them to the classifier.
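The interleaved glimpse loop can be sketched as follows; the attention function below (which merely selects one half of the input) is a crude stand-in for the real attention window, and all weights are random:

```python
import numpy as np

def rnn_step(h, g, Wh, Wg):
    # Simple tanh RNN cell: next hidden state from glimpse g and previous h.
    return np.tanh(Wh @ h + Wg @ g)

def attend(image, omega):
    # Hypothetical stand-in for the attention window: pick one of two halves
    # of the (flattened) image, depending on the attention parameters.
    half = len(image) // 2
    return image[:half] if omega.sum() > 0 else image[half:]

def compare(x_i, x, Wh, Wg, Wo, n_glimpses=3):
    h = np.zeros(Wh.shape[0])
    for t in range(2 * n_glimpses):        # interleave x_i, x, x_i, x, ...
        image = x_i if t % 2 == 0 else x
        omega = Wo @ h                     # attention params from previous h
        h = rnn_step(h, attend(image, omega), Wh, Wg)
    return h                               # final combined representation

rng = np.random.default_rng(2)
Wh = rng.standard_normal((5, 5)) * 0.1
Wg = rng.standard_normal((5, 2)) * 0.1
Wo = rng.standard_normal((4, 5)) * 0.1
rep = compare(rng.standard_normal(4), rng.standard_normal(4), Wh, Wg, Wo)
```

The final hidden state fuses information from both inputs from the very first glimpse onwards, which is the property that distinguishes this approach from embed-then-compare techniques.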
The attention-based approach is biologically inspired and plausible. A downside of attentive recurrent comparators is their higher computational cost, while their performance is often not better than that of less biologically plausible techniques, such as graph neural networks (Garcia and Bruna, 2017).

Metric-based Techniques, in conclusion
In this section, we have seen various metric-based techniques. The metric-based techniques meta-learn an informative feature space that can be used to compute class predictions based on input similarity scores.
Key advantages of these techniques are that i) the underlying idea of similarity-based predictions is conceptually simple, and ii) they can be fast at test-time when tasks are small, as the networks do not need to make task-specific adjustments. However, when tasks at meta-test time become more distant from the tasks that were used at meta-train time, metric-based techniques are unable to absorb new task information into the network weights, and performance may suffer. Furthermore, when tasks become larger, pair-wise comparisons may become computationally expensive. Lastly, most metric-based techniques rely on the presence of labeled examples, which makes them inapplicable outside of supervised learning settings.

Model-based Meta-Learning
A different approach to Deep Meta-Learning is the model-based approach. At a high level, model-based techniques rely upon an adaptive, internal state, in contrast to metric-based techniques, which generally use a fixed neural network at test-time.
More specifically, model-based techniques maintain a stateful, internal representation of a task. When presented with a task, a model-based neural network processes the support/train set in sequential fashion. At every time step, an input enters, and alters the internal state of the model. Thus, the internal state can capture relevant task-specific information, which can be used to make predictions for new inputs.
Because the predictions are based on internal dynamics that are hidden from the outside, model-based techniques are also called black-boxes. Information from previous inputs must be remembered, which is why model-based techniques have a memory component, either internal or external.
Recall that the mechanics of metric-based techniques were limited to pair-wise input comparisons. This is not the case for model-based techniques, where the human designer has the freedom to choose the internal dynamics of the algorithm. As a result, model-based techniques are not restricted to meta-learning good feature spaces, as they can also learn internal dynamics, used to process and predict input data of tasks.
More formally, given a support set D tr T j corresponding to task T j , model-based techniques compute a class probability distribution for a new input x as p θ (y|x, D tr T j ) = f θ (x, D tr T j ), where f represents the black-box neural network model, and θ its parameters.

Example
Using the same example as in Section 3, suppose we are given the task training set D tr T j from Figure 4 (in Section 3). Now, for the sake of the example, we do not use an input embedding function: our model will operate on the raw inputs of D tr T j and D test T j . As an internal state, our model uses an external memory matrix M ∈ R 4×(2+1) , with four rows (one for each example in our support set), and three columns (the dimensionality of input vectors, plus one dimension for the correct label). Our model proceeds to process the support set in sequential fashion, reading the examples from D tr T j one by one, and storing the i-th example in the i-th row of the memory matrix. After processing the support set, the memory matrix contains all examples, and as such serves as an internal task representation. Now, given the new input [4, 0.5], our model could use many different techniques to make a prediction based on this representation. For simplicity, assume that it computes the dot product between x and every memory M (i) (the 2-D vector in the i-th row of M , ignoring the correct label), and predicts the class of the example which yields the largest dot product. This would produce scores −2, −10, −6, and 24 for the examples in D tr T j respectively. Since the last example [6, 0] yields the largest dot product, we predict its class, i.e., 4.

Figure 10: Workflow of recurrent meta-learners in reinforcement learning contexts. As mentioned in Section 2.1.3, s t , r t , and d t denote the state, reward, and termination flag at time step t. h t refers to the hidden state at time t. Source: Duan et al. (2016).
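The toy memory model above can be written out directly; the first three memory rows and their labels are invented so that they reproduce the scores −2, −10, and −6 from the text, while the last row holds the example [6, 0] with label 4:

```python
import numpy as np

# Hypothetical support set stored in the memory matrix M: each row holds
# the two input features plus the label (the first three rows are invented).
M = np.array([
    [-0.5, 0.0, 1],
    [-2.5, 0.0, 2],
    [-1.5, 0.0, 3],
    [ 6.0, 0.0, 4],
])

x = np.array([4.0, 0.5])              # new query input
scores = M[:, :2] @ x                 # dot product with each stored example
pred = int(M[np.argmax(scores), 2])   # label of the best-matching row
```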
This example was deliberately easy for illustrative purposes. More advanced and successful techniques have been proposed, which we will now cover.

Recurrent Meta-Learners
Recurrent meta-learners (Duan et al., 2016; Wang et al., 2016) are, as the name suggests, meta-learners based on recurrent neural networks. The recurrent network serves as dynamic task embedding storage. These recurrent meta-learners were specifically proposed for reinforcement learning problems, hence we explain them in that setting. The recurrence is implemented by, e.g., an LSTM (Wang et al., 2016) or a GRU (Duan et al., 2016). The internal dynamics of the chosen Recurrent Neural Network (RNN) allow for fast adaptation to new tasks, while the algorithm used to train the recurrent net gradually accumulates knowledge about the task structure, where each task is modelled as an episode (or set of episodes).

The idea of recurrent meta-learners is quite simple: given a task T j , we feed the (potentially processed) environment variables [s t+1 , a t , r t , d t ] (see Section 2.1.3) into the RNN at every time step t. Recall that s, a, r, d denote the state, action, reward, and termination flag, respectively. Conditioned on its hidden state h t , the network outputs an action a t . The goal is to maximize the expected reward in each trial. See Figure 10 for a visual depiction. From this figure, it also becomes clear why these techniques are model-based: they embed information from previously seen inputs in the hidden state.

Recurrent meta-learners have been shown to perform almost as well as asymptotically optimal algorithms on simple reinforcement learning tasks (Duan et al., 2016; Wang et al., 2016). However, performance suffers in more complex settings, where temporal dependencies can span a longer horizon. Making recurrent meta-learners better at such complex tasks is a direction for future research.
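A bare-bones sketch of one such recurrent policy, using a plain tanh RNN cell instead of an LSTM or GRU, with random weights and a dummy episode:

```python
import numpy as np

def policy_step(h, s, a_prev, r, d, Wh, Wx, Wa):
    # One RNN step: the tuple [s, a_prev, r, d] is concatenated and fed in;
    # the hidden state h carries task information across time steps.
    x = np.concatenate([s, a_prev, [r], [d]])
    h = np.tanh(Wh @ h + Wx @ x)
    logits = Wa @ h                        # action scores from hidden state
    return h, int(np.argmax(logits))

rng = np.random.default_rng(3)
n_actions, state_dim, hidden = 2, 3, 6
Wh = rng.standard_normal((hidden, hidden)) * 0.1
Wx = rng.standard_normal((hidden, state_dim + n_actions + 2)) * 0.1
Wa = rng.standard_normal((n_actions, hidden)) * 0.1

h = np.zeros(hidden)
a = np.zeros(n_actions)                    # one-hot previous action
for t in range(5):                         # roll out a dummy episode
    s, r, d = rng.standard_normal(state_dim), 0.0, 0.0
    h, action = policy_step(h, s, a, r, d, Wh, Wx, Wa)
    a = np.eye(n_actions)[action]
```

In practice, the weights would be trained to maximize expected reward across many episodes, so that the hidden state comes to encode the structure of the task at hand.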

Memory-Augmented Neural Networks (MANNs)
The key idea of memory-augmented neural networks (Santoro et al., 2016) is to enable neural networks to learn quickly with the help of an external memory. The main controller (the recurrent neural network interacting with the memory) then gradually accumulates knowledge across tasks, while the external memory allows for quick task-specific adaptation. For this, Santoro et al. (2016) used Neural Turing Machines (Graves et al., 2014). Here, the controller is parameterized by θ and acts as the long-term memory of the memory-augmented neural network, while the external memory module is the short-term memory.
The workflow of memory-augmented neural networks is displayed in Figure 11. Note that the data from a task is processed as a sequence, i.e., data are fed into the network one by one. The support/train set is fed into the memory-augmented neural network first. Afterwards, the query/test set is processed. During the meta-train phase, train tasks can be fed into the network in arbitrary order. At time step t, the model receives input x t with the label of the previous input, i.e., y t−1 . This was done to prevent the network from mapping class labels directly to the output (Santoro et al., 2016).
Figure 11: Workflow of memory-augmented neural networks. Here, an episode corresponds to a given task T j . After every episode, the order of labels, classes, and samples should be shuffled to minimize dependence on arbitrarily assigned orders. Source: Santoro et al. (2016).

The interaction between the controller and memory is visualized in Figure 12. The idea is that the external memory module, which contains representations of previously seen inputs, can be used to make predictions for new inputs. In short, previously obtained knowledge is leveraged to aid the classification of new inputs. Note that vanilla neural networks also attempt to do this; however, their prior knowledge is slowly accumulated into the network weights, whereas an external memory module can store such information directly.

Given an input x t at time t, the controller generates a key k t , which can be stored in memory matrix M and used to retrieve previous representations from M . When reading from memory, the aim is to produce a linear combination of the keys stored in M , giving greater weight to those which have a larger cosine similarity with the current key k t . More specifically, a read vector w r t is created, in which each entry i denotes the cosine similarity between key k t and the memory of a previous input stored in row i, i.e., M t (i). Then, the representation r t = Σ i w r t (i)M t (i) is retrieved, which is simply a weighted combination of the rows of memory matrix M .
Predictions are made as follows. Given an input x t , memory-augmented neural networks use the external memory to compute the corresponding representation r t , which could be fed into a softmax layer, resulting in class probabilities. Across tasks, memory-augmented neural networks learn a good input embedding function f θ and classifier weights, which can be exploited when presented with new tasks.
To write input representations to memory, Santoro et al. (2016) propose a new mechanism called Least Recently Used Access (LRUA). LRUA writes either to the least, or to the most recently used memory location. In the former case, it preserves recent memories; in the latter, it updates recently obtained information. The writing mechanism works by keeping track of how often every memory location is accessed in a usage vector w u t , which is updated at every time step according to the following update rule: w u t := γw u t−1 + w r t + w w t , where superscripts u, w, and r refer to usage, write, and read vectors, respectively. In words, the previous usage vector is decayed (using parameter γ), while current reads (w r t ) and writes (w w t ) are added to the usage. Now, let n be the total number of reads to memory, and u(n) ('u' for 'least used') be the n-th smallest value in the usage vector w u t . Then, the least-used weights are defined as w lu t (i) = 1 if w u t (i) ≤ u(n), and 0 otherwise. The write vector w w t is then computed as w w t = σ(α)w r t−1 + (1 − σ(α))w lu t−1 , where α is a parameter that interpolates between the two weight vectors. As such, if σ(α) = 1, we write to the most recently used memory locations, whereas if σ(α) = 0, we write to the least recently used ones. Finally, writing is performed as follows: M t (i) := M t−1 (i) + w w t (i)k t , for every row i.

In summary, memory-augmented neural networks (Santoro et al., 2016) combine external memory and a neural network to achieve meta-learning. The interaction between a controller, with long-term memory parameters θ, and memory M may also be interesting for studying human meta-learning (Santoro et al., 2016). In contrast to many metric-based techniques, this model-based technique is applicable to both classification and regression problems. A downside of this approach is its architectural complexity.
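The LRUA write-weight computation can be sketched as follows; the usage and read vectors below are invented for illustration:

```python
import numpy as np

def lrua_write_weights(w_r_prev, w_u, alpha, n):
    # Least-used weights: 1 at the n least-used memory slots, 0 elsewhere.
    threshold = np.sort(w_u)[n - 1]
    w_lu = (w_u <= threshold).astype(float)
    # Interpolate between the previous read weights (most recently used)
    # and the least-used weights, via the scalar parameter alpha.
    sigma = 1.0 / (1.0 + np.exp(-alpha))
    return sigma * w_r_prev + (1.0 - sigma) * w_lu

def usage_update(w_u_prev, w_r, w_w, gamma=0.95):
    # w_u := gamma * w_u_prev + w_r + w_w
    return gamma * w_u_prev + w_r + w_w

w_u = np.array([0.9, 0.1, 0.5, 0.05])       # slot 3 is least used
w_r_prev = np.array([1.0, 0.0, 0.0, 0.0])   # last read hit slot 0
w_w = lrua_write_weights(w_r_prev, w_u, alpha=-10.0, n=1)  # sigma ~ 0
w_u_new = usage_update(w_u, w_r_prev, w_w)
```

With α strongly negative (σ(α) ≈ 0), the write mass lands almost entirely on the least-used slot, whose usage then increases accordingly.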

Meta Networks
Meta networks (Munkhdalai and Yu, 2017) are divided into two distinct subsystems (consisting of neural networks): the base-learner and the meta-learner (whereas in memory-augmented neural networks, the base- and meta-components are intertwined). The base-learner is responsible for performing tasks, and for providing the meta-learner with meta-information, such as loss gradients. The meta-learner can then compute fast task-specific weights for itself and the base-learner, such that the latter can perform better on the given task T j = (D tr T j , D test T j ). This workflow is depicted in Figure 13.
The meta-learner consists of neural networks u φ , m ϕ , and d ψ . Network u φ is used as input representation function. Networks d ψ and m ϕ are used to compute task-specific weights φ * and example-level fast weights θ * , respectively. Lastly, b θ is the base-learner, which performs input predictions. Note that we use the term fast weights throughout, which refers to task- or input-specific versions of slow (initial) weights.
In similar fashion to memory-augmented neural networks (Santoro et al., 2016), meta networks (Munkhdalai and Yu, 2017) also leverage the idea of an external memory module. However, meta networks use the memory for a different purpose. The memory stores, for each observation x i in the support set, two components: its representation r i and its fast weights θ * i . These are then used to compute an attention-based representation and fast weights for new inputs, respectively.
Algorithm 1 Meta networks, by Munkhdalai and Yu (2017)
1: Sample a subset S = {(x i , y i )} from D tr T j
2: for (x i , y i ) ∈ S do
3:    L i = error(u φ (x i ), y i )
4: end for
5: φ * = d ψ ({∇ φ L i })
6: for the i-th example (x i , y i ) ∈ D tr T j do
7:    L i = error(b θ (x i ), y i )
8:    θ * i = m ϕ (∇ θ L i )
9:    Store θ * i in i-th position of example-level weight memory M
10:   r i = u φ,φ * (x i )
11:   Store r i in i-th position of representation memory R
12: end for
13: L task = 0
14: for (x, y) ∈ D test T j do
15:   r = u φ,φ * (x)
16:   a = similarity(R, r)
17:   θ * = softmax(a) T M
18:   L task = L task + error(b θ,θ * (x), y)
19: end for
20: Update Θ = {θ, φ, ψ, ϕ} using ∇ Θ L task

The pseudocode for meta networks is displayed in Algorithm 1. First, a sample of the support set is created (line 1), which is used to compute task-specific weights φ * for the representation network u φ (lines 2-5). Note that u φ has two tasks: i) it should compute representations for inputs x i (lines 10 and 15), and ii) it needs to make predictions for inputs x i in order to compute a loss (line 3). To achieve both goals, a conventional neural network can be used that makes class predictions; the states of the final hidden layer are then used as representation. Typically, the cross-entropy loss is calculated over the predictions of representation network u φ . When there are multiple examples per class in the support set, an alternative is to use a contrastive loss function (Munkhdalai and Yu, 2017).
Then, meta networks iterate over every example (x i , y i ) in the support set D tr T j . The base-learner b θ attempts to make class predictions for these examples, resulting in loss values L i (lines 7-8). The gradients of these losses are used to compute fast weights θ * i for example i (line 8), which are then stored in the i-th row of memory matrix M (line 9). Additionally, input representations r i are computed and stored in memory matrix R (lines 10-11).

Now, meta networks are ready to address the query set D test T j . They iterate over every example (x, y), and compute a representation r of it (line 15). This representation is matched against the representations of the support set, which are stored in memory matrix R. This matching gives us a similarity vector a, where every entry k denotes the similarity between input representation r and the k-th row in memory matrix R, i.e., R(k) (line 16). A softmax over this similarity vector is performed to normalize the entries. The resulting vector is used to compute a linear combination of the weights that were generated for inputs in the support set (line 17). These weights θ * are specific to input x in the query set, and can be used by the base-learner b θ to make predictions for that input (line 18). The observed error is added to the task loss. After the entire query set is processed, all involved parameters can be updated using backpropagation (line 20).

In short, meta networks rely on a reparameterization of the meta- and base-learner for every task. Despite their flexibility and applicability to both supervised and reinforcement learning settings, the approach is quite complex. It consists of many components, each with its own set of parameters, which can be a burden on memory usage and computation time. Additionally, finding the correct architecture for all the involved components can be time-consuming.
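The attention over the representation memory R and the resulting combination of fast weights from M (lines 15-17) can be sketched as follows, with invented memory contents:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Representation memory R and example-level fast-weight memory M
# (one row per support example; all values are hypothetical).
R = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
M = np.array([[0.1, 0.2], [0.9, 0.8], [0.5, 0.5]])

r = np.array([0.9, 0.1])          # representation of a query input
a = R @ r                         # similarity to stored representations
theta_star = softmax(a) @ M       # attention-weighted fast weights for x
```

The resulting θ * is a convex combination of the stored example-level fast weights, dominated by those of the most similar support example.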

Simple Neural Attentive Meta-Learner (SNAIL)
Instead of an external memory matrix, SNAIL (Mishra et al., 2018) relies on a special model architecture to serve as memory. Mishra et al. (2018) argue that recurrent neural networks are unsuitable for this purpose, as they have limited memory capacity and cannot pinpoint specific prior experiences. Hence, SNAIL uses a different architecture, consisting of 1D temporal convolutions (Oord et al., 2016) and a soft attention mechanism (Vaswani et al., 2017). The temporal convolutions allow for 'high-bandwidth' memory access, and the attention mechanism allows the model to pinpoint specific experiences.

SNAIL consists of three building blocks. The first is the DenseBlock, which applies a single 1D convolution to the input, and concatenates (in the feature/horizontal direction) the result. The second is the TCBlock, which is simply a series of DenseBlocks with exponentially increasing dilation rates of the temporal convolutions (Mishra et al., 2018). Note that the dilation is simply the temporal distance between two connected nodes in the network. For example, if we use a dilation of 2, a node at position p in layer L will receive the activation from node p − 2 in layer L − 1. The third block is the AttentionBlock, which learns to focus on the important parts of prior experience.
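The dilated causal convolution underlying the DenseBlock can be sketched as follows; kernel size 2, unit weights, and a single feature channel are all simplifying assumptions:

```python
import numpy as np

def causal_conv1d(seq, w, dilation):
    # Dilated causal 1D convolution with kernel size 2: the output at
    # position p combines seq[p] and seq[p - dilation] (zero-padded start).
    padded = np.concatenate([np.zeros(dilation), seq])
    return w[0] * padded[:len(seq)] + w[1] * seq

def dense_block(seq, w, dilation):
    # DenseBlock: concatenate the convolution result to its input
    # (stacked as rows here, since we use a single feature channel).
    return np.vstack([seq, causal_conv1d(seq, w, dilation)])

seq = np.arange(8, dtype=float)
out = causal_conv1d(seq, w=np.array([1.0, 1.0]), dilation=2)
block = dense_block(seq, np.array([1.0, 1.0]), dilation=1)
```

Stacking such layers with dilations 1, 2, 4, ... (a TCBlock) lets the receptive field grow exponentially with depth, which is what provides the 'high-bandwidth' access to past time steps.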
In similar fashion to memory-augmented neural networks (Santoro et al., 2016) (Section 4.3), SNAIL also processes task data in sequence, as shown in Figure 15. However, the input at time t is accompanied with the label at time t, instead of t − 1 (as was the case for memory-augmented neural networks). SNAIL learns internal dynamics from seeing various tasks, so that it can make good predictions on the query set, conditioned upon the support set.
A key advantage of SNAIL is that it can be applied to both supervised and reinforcement learning tasks. In addition, it achieves good performance compared to previously discussed techniques. A downside of SNAIL is that finding the correct architecture of TCBlocks and DenseBlocks can be time consuming.

Conditional Neural Processes (CNPs)
In contrast to previous techniques, a conditional neural process (CNP) (Garnelo et al., 2018) does not rely on an external memory module. Instead, it aggregates the support set into a single latent representation. The general architecture is shown in Figure 16. As we can see, the conditional neural process operates in three phases on task T j . First, it observes the training set D tr T j , including the ground-truth outputs y i . Examples (x i , y i ) ∈ D tr T j are embedded using a neural network h θ into representations r i . Second, these representations are aggregated by an operator a to produce a single representation r of D tr T j (hence the technique is model-based). Third, a neural network g φ processes this single representation r together with new inputs x, and produces predictions ŷ.

Let the entire conditional neural process model be denoted by Q Θ , where Θ is the set of all involved parameters {θ, φ}. The training process is different compared to other techniques. Let x T j and y T j denote all inputs and corresponding outputs in D tr T j . Then, a value ℓ ∼ U (0, . . . , k · N − 1) is sampled, and the first ℓ examples in D tr T j are used as a conditioning set D c T j (effectively splitting the train set into a true train set and a validation set). Given a value of ℓ, the goal is to maximize the log-likelihood (or minimize the negative log-likelihood) of the labels y T j in the entire train set D tr T j , conditioned on D c T j . Conditional neural processes are trained by repeatedly sampling various tasks and values of ℓ, and propagating the observed loss backwards.

In summary, conditional neural processes use compact representations of previously seen inputs to aid the classification of new observations. Despite their simplicity and elegance, a disadvantage of this technique is that it is often outperformed in few-shot settings by other techniques, such as matching networks (Vinyals et al., 2016) (see Section 3.3).

Figure 17: Neural statistician architecture. Edges are neural networks. All incoming inputs to a node are concatenated.
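The three-phase CNP pipeline (embed pairs, aggregate with a mean operator, decode) can be sketched as follows; both networks are hypothetical random linear maps:

```python
import numpy as np

def h_encode(x, y, W):
    # h_theta: embed one (x, y) support pair (hypothetical linear encoder).
    return W @ np.concatenate([x, [y]])

def cnp_predict(x_new, support, W_enc, W_dec):
    # Aggregate all pair embeddings with the mean operator a(.) into one r,
    # then decode [r, x_new] into a prediction with g_phi.
    r = np.mean([h_encode(x, y, W_enc) for x, y in support], axis=0)
    return W_dec @ np.concatenate([r, x_new])

rng = np.random.default_rng(4)
W_enc = rng.standard_normal((4, 3)) * 0.1
W_dec = rng.standard_normal((1, 6)) * 0.1
support = [(np.array([0.0, 1.0]), 1.0), (np.array([1.0, 0.0]), 0.0)]
y_hat = cnp_predict(np.array([0.5, 0.5]), support, W_enc, W_dec)
```

Because the mean aggregator is order-invariant, the prediction does not depend on the order in which the support examples are processed, which is a defining design choice of CNPs.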

Neural Statistician
A neural statistician (Edwards and Storkey, 2017) differs from earlier approaches as it learns to compute summary statistics of data sets in an unsupervised manner. These latent embeddings (making the approach model-based) can then later be used for making predictions. Despite the broad applicability of the model, we discuss it in the context of Deep Meta-Learning.
A neural statistician performs both learning and inference. In the learning phase, the model attempts to produce a generative model P̂_i for every data set D_i. The key assumption made by Edwards and Storkey (2017) is that there exists a generative process P_i which, conditioned on a latent context vector c_i, can produce data set D_i. At inference time, the goal is to infer a (posterior) probability distribution over the context, q(c|D).
The model uses a variational autoencoder, which consists of an encoder and a decoder. The encoder is responsible for producing a distribution over latent vectors z: q(z|x; φ), where x is an input vector and φ are the encoder parameters. The encoded input z, which is often of lower dimensionality than the original input x, can then be decoded by the decoder p(x|z; θ), where θ are the parameters of the decoder. To capture more complex patterns in data sets, the model uses multiple latent layers z_1, ..., z_L, as shown in Figure 17. Given this architecture, a variational posterior is inferred over both the context c and the latent layers z_1, ..., z_L (shorthand z_{1:L}).
The neural statistician is trained to minimize a three-component loss function, consisting of the reconstruction loss (how well it models the data), the context loss (how well the inferred context q(c|D; φ) corresponds to the prior P(c)), and the latent loss (how well the inferred latent variables z_i are modelled). This model can be applied to N-way, few-shot learning as follows. Construct N data sets, one for each of the N classes, such that each data set contains only examples of a single class. Then, the neural statistician is provided with a new input x and has to predict its class. It computes a context posterior N_x = q(c|x; φ) for the new input x. In similar fashion, context posteriors N_i = q(c|D_i; φ) are computed for all of the data sets. Lastly, it assigns the label i for which the difference between N_i and N_x is minimal.
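The classification rule above can be illustrated as follows. The fixed feature map, the Gaussian posteriors, and the KL-based comparison are simplifying assumptions made for this sketch; the real model learns its encoder:

```python
import numpy as np

# Sketch of the neural statistician's few-shot rule: infer a Gaussian context
# posterior for each per-class data set and for the new input, then assign the
# class whose posterior is closest. The fixed feature map and the KL-based
# comparison are simplifying assumptions; the real encoder is learned.

def encode(x):
    return np.array([x, x ** 2])      # hypothetical per-example feature

def context_posterior(dataset):
    feats = np.stack([encode(x) for x in dataset])
    mu = feats.mean(axis=0)
    sigma = feats.std(axis=0) + 1.0   # crude, strictly positive scale
    return mu, sigma

def kl_diag_gauss(p, q):
    (mu0, s0), (mu1, s1) = p, q
    return float(np.sum(np.log(s1 / s0)
                        + (s0 ** 2 + (mu0 - mu1) ** 2) / (2 * s1 ** 2) - 0.5))

def classify(x, class_datasets):
    q_x = context_posterior([x])                       # N_x = q(c|x)
    kls = [kl_diag_gauss(q_x, context_posterior(d))    # N_i = q(c|D_i)
           for d in class_datasets]
    return int(np.argmin(kls))

classes = [[0.9, 1.0, 1.1], [4.9, 5.0, 5.1]]  # one data set per class
```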
In summary, the Neural Statistician (Edwards and Storkey, 2017) allows for quick learning on new tasks through data set modeling. Additionally, it is applicable to both supervised and unsupervised settings. A downside is that the approach requires many data sets to achieve good performance (Edwards and Storkey, 2017).

Model-based Techniques, in conclusion
In this section, we have discussed various model-based techniques. Despite apparent differences, they all build on the notion of task internalization. That is, tasks are processed and represented in the state of the model-based system. This state can then be used to make predictions.
Advantages of model-based approaches include the flexibility of the internal dynamics of the systems and their broader applicability compared to most metric-based techniques. However, model-based techniques are often outperformed by metric-based techniques in supervised settings (e.g., graph neural networks (Garcia and Bruna, 2017); Section 3.6), may not perform well when presented with larger data sets (Hospedales et al., 2020), and generalize less well to more distant tasks than optimization-based techniques. We discuss optimization-based approaches next.

Optimization-based Meta-Learning
Optimization-based techniques adopt a different perspective on meta-learning than the previous two approaches. They explicitly optimize for fast learning. Most optimization-based techniques do so by approaching meta-learning as a bi-level optimization problem. At the inner-level, a base-learner makes task-specific updates using some optimization strategy (such as gradient descent). At the outer-level, the performance across tasks is optimized.
More formally, given a task T_j = (D^{tr}_{T_j}, D^{test}_{T_j}) with a new input x ∈ D^{test}_{T_j} and base-learner parameters θ, optimization-based meta-learners return predictions

ŷ = f_{g_φ(θ, D^{tr}_{T_j}, L_{T_j})}(x),

where f is the base-learner and g_φ is a (learned) optimizer that makes task-specific updates to the base-learner parameters θ using the training data D^{tr}_{T_j} and the loss function L_{T_j}.

Example
Suppose we are faced with a linear regression problem, where every task is associated with a different function f(x). For this example, suppose our model has only two parameters: a and b, which together form the function f̂(x) = ax + b. Suppose further that our meta-training set consists of four different tasks: A, B, C, and D. Then, according to the optimization-based view, we wish to find a single set of parameters {a, b} from which we can quickly learn the optimal parameters for each of the four tasks, as displayed in Figure 18. In fact, this is the intuition behind the popular optimization-based technique MAML (Finn et al., 2017). We will now discuss the core optimization-based techniques in more detail.
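This toy problem can be run numerically with a first-order variant of the idea (full MAML would additionally differentiate through the inner step); the four (a, b) task pairs and all constants below are hypothetical:

```python
import numpy as np

# First-order sketch of the MAML idea on the f(x) = a*x + b example:
# learn an initialization (a, b) from which a single inner gradient step
# adapts well to each task. The four tasks A-D are hypothetical (a, b) pairs.
tasks = [(1.0, 0.0), (2.0, 1.0), (-1.0, 0.5), (0.5, -1.0)]
xs = np.linspace(-1, 1, 10)

def loss_and_grad(params, a, b):
    # MSE of params[0]*x + params[1] against the task's true line, and its gradient
    err = params[0] * xs + params[1] - (a * xs + b)
    return float(np.mean(err ** 2)), np.array([np.mean(2 * err * xs), np.mean(2 * err)])

theta = np.zeros(2)                  # the meta-learned initialization (a, b)
alpha, beta = 0.1, 0.05              # inner and outer learning rates
for _ in range(500):
    outer_grad = np.zeros(2)
    for a, b in tasks:
        _, g = loss_and_grad(theta, a, b)
        fast = theta - alpha * g                 # inner, task-specific step
        _, g_fast = loss_and_grad(fast, a, b)
        outer_grad += g_fast                     # first-order approximation
    theta -= beta * outer_grad / len(tasks)
```

From the learned θ, one gradient step reduces every task's loss, which is exactly the "quick adaptation" the figure depicts.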

LSTM Optimizer
Standard gradient update rules have the form

θ_t = θ_{t−1} − α∇_{θ_{t−1}} L_{T_j}(θ_{t−1}),    (15)

where α is the learning rate and L_{T_j}(θ_{t−1}) is the loss function with respect to task T_j and the network parameters at time t − 1. The key idea underlying LSTM optimizers (Andrychowicz et al., 2016) is to replace the update term −α∇_{θ_{t−1}} L_{T_j}(θ_{t−1}) by an update proposed by an LSTM g with parameters ϕ. The new update then becomes

θ_t = θ_{t−1} + g_ϕ(∇_{θ_{t−1}} L_{T_j}(θ_{t−1})).

This allows the optimization strategy to be tailored to a specific family of tasks. Note that this is meta-learning: the LSTM learns to learn. As such, this technique essentially learns an update policy.
Figure 19: Workflow of the LSTM optimizer. Gradients can only propagate backwards through solid edges. f_t denotes the observed loss at time step t. Source: Andrychowicz et al. (2016).
The loss function used to train an LSTM optimizer is

L(ϕ) = E_f [ Σ^T_{t=1} w_t f(θ_t) ],

where f denotes the optimizee's loss, T is the number of parameter updates that are made, and w_t are weights indicating the importance of the performance after t steps. Note that generally we are only interested in the final performance after T steps. However, the authors found that the optimization procedure was better guided by equally weighting the performance after each gradient descent step. As is often done, second-order derivatives (arising from the dependency between the updated weights and the LSTM optimizer) were ignored due to the computational expenses associated with their computation. This loss function is fully differentiable and thus allows for training an LSTM optimizer (see Figure 19). To prevent a parameter explosion, the same network is used for every coordinate/weight in the base-learner's network, causing the update rule to be the same for every parameter. Of course, the updates themselves depend on the parameters' prior values and gradients.
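To make the idea of a learned update rule concrete, the sketch below replaces the LSTM by the simplest possible learnable optimizer, a single scalar ϕ multiplying the gradient, and "meta-trains" it by grid search on the summed loss with equal weights w_t = 1. Everything here is a toy stand-in for the actual LSTM optimizer:

```python
import numpy as np

# Minimal sketch of learning an update rule: the "optimizer" g_phi replaces
# the hand-crafted step -alpha * grad. Here g_phi is just a learnable scalar
# multiple of the gradient (a real LSTM optimizer conditions on more state);
# the meta-loss sums the optimizee's loss after every step, equally weighted.

def optimizee_loss(theta):
    return float(theta ** 2)

def meta_loss(phi, theta0=1.0, T=5):
    theta, total = theta0, 0.0
    for _ in range(T):
        grad = 2 * theta                # d/dtheta of theta^2
        theta = theta + (-phi * grad)   # update proposed by g_phi
        total += optimizee_loss(theta)  # w_t = 1 for every step
    return total

# "Meta-train" phi by picking the best update rule on a grid.
candidates = np.linspace(0.0, 1.0, 101)
phi_star = float(candidates[np.argmin([meta_loss(p) for p in candidates])])
```

On this quadratic family, the learned rule recovers the step size that drives the loss to zero immediately, something a poorly tuned hand-crafted rate cannot do.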
The key advantage of LSTM optimizers is that they can enable faster learning compared to hand-crafted optimizers, also on different data sets than those used to train the optimizer. However, Andrychowicz et al. (2016) did not apply this technique to few-shot learning. In fact, they did not apply it across tasks at all. Thus, it is unclear whether this technique can perform well in few-shot settings, where few data per class are available for training. Furthermore, the question remains whether it can scale to larger base-learner architectures.

LSTM Meta-Learner
Instead of having an LSTM predict gradient updates, Ravi and Larochelle (2017) embed the base-learner parameters into the cell state (the long-term memory component) of an LSTM, giving rise to LSTM meta-learners. As such, the base-learner parameters θ literally reside inside the LSTM memory component (cell state). In this way, cell state updates correspond to base-learner parameter updates. This idea was inspired by the resemblance between the gradient update rule and the cell state update rule. Gradient updates often have the form shown in Equation 15. The LSTM cell state update rule, in contrast, looks as follows:

c_t = f_t ⊙ c_{t−1} + α_t ⊙ c̃_t,    (18)

where f_t is the forget gate (which determines which information should be forgotten) at time t, ⊙ represents the element-wise product, c_t is the cell state at time t, c̃_t the candidate cell state for time step t, and α_t the learning rate at time step t. Note that if f_t = 1 (a vector of ones), α_t = α, c_{t−1} = θ_{t−1}, and c̃_t = −∇_{θ_{t−1}} L_{T_t}(θ_{t−1}), this update is equivalent to the one used by gradient descent. This similarity inspired Ravi and Larochelle (2017) to use an LSTM as a meta-learner that learns to make updates for a base-learner, as shown in Figure 20. More specifically, the cell state of the LSTM is initialized with c_0 = θ_0, which will be adjusted by the LSTM to a good common initialization point across different tasks. Then, to update the weights of the base-learner for the next time step t + 1, the LSTM computes c_{t+1} and sets the weights of the base-learner equal to it. There is thus a one-to-one correspondence between c_t and θ_t. The meta-learner's learning rate α_t (see Equation 18) is set equal to σ(w_α · [∇_{θ_{t−1}} L_{T_t}(θ_{t−1}), L_{T_t}(θ_t), θ_{t−1}, α_{t−1}] + b_α), where σ is the sigmoid function. Note that the output is a vector with values between 0 and 1, which denote the learning rates for the corresponding parameters. Furthermore, w_α and b_α are trainable parameters that are part of the LSTM meta-learner.
In words, the learning rate at any time depends on the loss gradients, the loss value, the previous parameters, and the previous learning rate. The forget gate, f t , determines what part of the cell state should be forgotten, and is computed in a similar fashion, but with different weights.
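The claimed equivalence between the cell state update and a gradient-descent step is easy to verify numerically (all values below are arbitrary):

```python
import numpy as np

# Numerical check of the resemblance the LSTM meta-learner exploits:
# with f_t = 1, c_{t-1} = theta_{t-1}, and candidate cell state
# c~_t = -grad, the cell update  c_t = f_t * c_{t-1} + alpha_t * c~_t
# reproduces a plain gradient-descent step.

theta_prev = np.array([0.5, -1.0, 2.0])   # theta_{t-1}
grad = np.array([0.2, -0.4, 1.0])         # gradient of the task loss
alpha = 0.1

gd_step = theta_prev - alpha * grad       # Equation 15

f_t = np.ones_like(theta_prev)            # forget nothing
c_prev = theta_prev
c_tilde = -grad
cell_update = f_t * c_prev + alpha * c_tilde   # Equation 18
```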
To prevent an explosion of meta-learner parameters, weight-sharing is used, in similar fashion to LSTM optimizers proposed by Andrychowicz et al. (2016) (Section 5.2). This implies that the same update rule is applied to every weight at a given time step. The exact update, however, depends on the history of that specific parameter in terms of previous learning rate, loss, etc. For simplicity, second-order derivatives were ignored, by assuming the base-learner's loss does not depend on the cell state of the LSTM optimizer. Batch normalization was applied to stabilize and speed up the learning process.
In short, LSTM meta-learners can learn to optimize a base-learner by maintaining a one-to-one correspondence over time between the base-learner's weights and the LSTM cell state. This allows the LSTM to exploit commonalities in the tasks, allowing for quicker optimization. However, there are simpler approaches (e.g., MAML (Finn et al., 2017)) that outperform this technique.

Reinforcement Learning Optimizer
Li and Malik (2018) proposed a framework that casts optimization as a reinforcement learning problem. Optimization can then be performed by existing reinforcement learning techniques. At a high level, an optimization algorithm g takes as input an initial set of weights θ_0 and a task T_j with corresponding loss function L_{T_j}, and produces a sequence of new weights θ_1, . . . , θ_T, where θ_T is the final solution found. On this sequence of proposed weights, we can define a meta-loss L that captures unwanted properties (e.g., slow convergence, oscillations). The goal of learning an optimizer can then be formulated more precisely as finding the optimizer g that minimizes the expected value of this meta-loss over tasks. The key insight is that this optimization process can be formulated as a Partially Observable Markov Decision Process (POMDP): the state corresponds to the current set of weights θ_t, the action to the proposed update at time step t, i.e., ∆θ_t, and the policy to the function that computes the update. With this formulation, the optimizer g can be learned by existing reinforcement learning techniques. In their paper, Li and Malik (2018) used a recurrent neural network as the optimizer. At each time step, they feed it observation features, which depend on the previous set of weights, loss gradients, and objective functions, and use guided policy search to train it.
In summary, Li and Malik (2018) made a first step towards general optimization through reinforcement learning optimizers, which were shown to generalize across network architectures and data sets. However, the base-learner architecture that was used was quite small. The question remains whether this approach can scale to larger architectures.

MAML
Model-agnostic meta-learning (MAML) (Finn et al., 2017) uses a simple gradient-based inner optimization procedure (e.g., stochastic gradient descent) instead of more complex LSTM procedures or procedures based on reinforcement learning. The key idea of MAML is to explicitly optimize for fast adaptation to new tasks by learning a good set of initialization parameters θ. This is shown in Figure 21: from the learned initialization θ, we can quickly move to the best set of parameters for task T_j, i.e., θ*_j for j = 1, 2, 3. The learned initialization can be seen as the inductive bias of the model, or simply the set of assumptions (encapsulated in θ) that the model makes with respect to the overall task structure.
More formally, let θ denote the initial model parameters. The goal is to quickly learn new concepts, which is equivalent to achieving a minimal loss in few gradient update steps. The number of gradient steps s has to be specified upfront, such that MAML can explicitly optimize for achieving good performance within that number of steps. Suppose we pick a single gradient update step, i.e., s = 1. Then, given a task T_j = (D^{tr}_{T_j}, D^{test}_{T_j}), gradient descent produces updated parameters (fast weights)

θ_j = θ − α∇_θ L_{D^{tr}_{T_j}}(θ)

specific to task j. The meta-loss of quick adaptation (using s = 1 gradient steps) across tasks can then be formulated as

min_θ Σ_{T_j ∼ p(T)} L_{D^{test}_{T_j}}(θ_j) = min_θ Σ_{T_j ∼ p(T)} L_{D^{test}_{T_j}}(θ − α∇_θ L_{D^{tr}_{T_j}}(θ)),

where p(T) is a probability distribution over tasks. This expression contains an inner gradient ∇_θ L_{D^{tr}_{T_j}}(θ). As such, by optimizing this meta-loss using gradient-based techniques, we have to compute second-order gradients, as one can easily see in the computation below:

∇_θ L_{D^{test}_{T_j}}(θ_j) = L′_{D^{test}_{T_j}}(θ_j) ∇_θ θ_j = L′_{D^{test}_{T_j}}(θ_j) ∇_θ (θ − α∇_θ L_{D^{tr}_{T_j}}(θ)) = L′_{D^{test}_{T_j}}(θ_j) (I − α∇²_θ L_{D^{tr}_{T_j}}(θ)),

where we used L′_{D^{test}_{T_j}}(θ_j) to denote the derivative of the loss function with respect to the test set, evaluated at the post-update parameters θ_j. The term α∇²_θ L_{D^{tr}_{T_j}}(θ) contains the second-order gradients. Their computation is expensive in terms of time and memory, especially when the optimization trajectory is large (i.e., when using a larger number of gradient updates s per task). Finn et al. (2017) experimented with leaving out second-order gradients by assuming ∇_θ θ_j = I, giving us First-Order MAML (FOMAML), which updates the initialization using only the first-order gradients Σ_{T_j ∼ p(T)} L′_{D^{test}_{T_j}}(θ_j). They found that FOMAML performed reasonably similar to MAML, meaning that the first-order update is roughly equal to using the full gradient expression of the meta-loss. One can extend the meta-loss to incorporate multiple gradient steps by substituting θ_j by a multi-step variant. MAML is trained by continuously sampling batches of m tasks B = {T_j ∼ p(T)}^m_{j=1} and updating the initialization weights θ.
Then, for every task T_j ∈ B, an inner update is performed to obtain θ_j, in turn granting an observed loss L_{D^{test}_{T_j}}(θ_j). These losses across the batch of tasks are used in the outer update

θ = θ − β∇_θ Σ_{T_j ∈ B} L_{D^{test}_{T_j}}(θ_j).

The complete training procedure of MAML is displayed in Algorithm 2. At test time, when presented with a new task T_j, the model is initialized with θ and performs a number of gradient updates on the task data. Note that the algorithm for FOMAML is equivalent to Algorithm 2, except that the update on line 8 is done differently: FOMAML updates the initialization with the rule θ = θ − β Σ_{T_j ∈ B} L′_{D^{test}_{T_j}}(θ_j).

Algorithm 2 One-step MAML for supervised learning, by Finn et al. (2017)
1: Randomly initialize θ
2: while not done do
3:   Sample batch of J tasks B = {T_1, . . . , T_J} ∼ p(T)
4:   for T_j ∈ B do
5:     Compute updated parameters θ_j = θ − α∇_θ L_{D^{tr}_{T_j}}(θ)
6:     Evaluate L_{D^{test}_{T_j}}(θ_j)
7:   end for
8:   Update θ = θ − β∇_θ Σ_{T_j ∈ B} L_{D^{test}_{T_j}}(θ_j)
9: end while

In response to MAML, Antoniou et al. (2019) proposed many technical improvements that can improve training stability, performance, and generalization ability. Improvements include i) updating the initialization θ after every inner update step (instead of after all steps are done) to increase gradient propagation, ii) using second-order gradients only after 50 epochs to increase the training speed, iii) learning layer-wise learning rates to improve flexibility, iv) annealing the meta-learning rate β over time, and v) some batch normalization tweaks (keeping running statistics instead of batch-specific ones, and using per-step biases). MAML has received great attention within the field of Deep Meta-Learning, perhaps due to its i) simplicity (it requires only two hyperparameters), ii) general applicability, and iii) strong performance. A downside of MAML, as mentioned above, is that it can be quite expensive in terms of running time and memory to optimize a base-learner for every task and to compute higher-order derivatives from the optimization trajectories.
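The role of the second-order term can be checked numerically on a hypothetical 1-D quadratic task (train and test loss taken identical purely for illustration):

```python
# The second-order term in the MAML meta-gradient, checked numerically on a
# toy 1-D quadratic task with loss L(theta) = 0.5 * h * theta^2. The exact
# meta-gradient carries the factor d(theta_j)/d(theta) = 1 - alpha*h;
# FOMAML drops it. All constants are illustrative.
h, alpha, theta = 3.0, 0.1, 2.0

def loss(t):
    return 0.5 * h * t ** 2

def grad(t):
    return h * t

def adapted(t):
    return t - alpha * grad(t)       # one inner gradient step: theta_j

exact = (1 - alpha * h) * grad(adapted(theta))   # full second-order expression
fomaml = grad(adapted(theta))                    # first-order approximation

# Finite-difference check of the exact meta-gradient d L(theta_j) / d theta.
eps = 1e-6
numeric = (loss(adapted(theta + eps)) - loss(adapted(theta - eps))) / (2 * eps)
```

The finite difference matches the second-order expression, while the first-order value differs, which is exactly the gap FOMAML accepts in exchange for cheaper updates.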

iMAML
Instead of ignoring higher-order derivatives (as done by FOMAML), which potentially decreases performance compared to regular MAML, iMAML approximates these derivatives in a way that is less memory-consuming. Let A denote an inner optimization algorithm (e.g., stochastic gradient descent), which takes a training set D^{tr}_{T_j} corresponding to task T_j and initial model weights θ, and produces new weights θ_j = A(θ, D^{tr}_{T_j}). MAML has to compute the derivative

∇_θ L_{D^{test}_{T_j}}(A(θ, D^{tr}_{T_j})) = ∇_{θ_j} L_{D^{test}_{T_j}}(θ_j) ∇_θ A(θ, D^{tr}_{T_j}),    (24)

where D^{test}_{T_j} is the test set corresponding to task T_j. This equation is a simple result of applying the chain rule. Importantly, note that the factor ∇_θ A(θ, D^{tr}_{T_j}) differentiates through the inner optimization process. iMAML avoids this by letting A (approximately) solve the regularized inner objective

A(θ, D^{tr}_{T_j}) = argmin_{θ'} L_{D^{tr}_{T_j}}(θ') + (λ/2) ‖θ' − θ‖²,    (25)

where λ is a regularization parameter. The reason for this is discussed below.

Combining Equation 24 and Equation 25, we have that the meta-gradient for task T_j equals

(I + (1/λ) ∇²_{θ_j} L_{D^{tr}_{T_j}}(θ_j))⁻¹ ∇_{θ_j} L_{D^{test}_{T_j}}(θ_j).
The idea is to obtain an approximate gradient vector g_j that is close to this expression, i.e., we want the difference

‖ g_j − (I + (1/λ) ∇²_{θ_j} L_{D^{tr}_{T_j}}(θ_j))⁻¹ ∇_{θ_j} L_{D^{test}_{T_j}}(θ_j) ‖

to be small, up to some small tolerance. If we multiply both terms of the difference by (I + (1/λ) ∇²_{θ_j} L_{D^{tr}_{T_j}}(θ_j)), we obtain the equivalent objective of minimizing

‖ (I + (1/λ) ∇²_{θ_j} L_{D^{tr}_{T_j}}(θ_j)) g_j − ∇_{θ_j} L_{D^{test}_{T_j}}(θ_j) ‖,

where the tolerance absorbed the multiplication factor. Minimizing this expression over g_j can be performed with optimization techniques such as the conjugate gradient algorithm. This algorithm only requires Hessian-vector products and does not need to store Hessian matrices, which decreases the memory cost significantly. In turn, this allows iMAML to work with more inner gradient update steps. Note, however, that one then needs to perform explicit regularization to avoid overfitting. Conventional MAML does not require this, as it uses only a small number of gradient steps (which is equivalent to an early-stopping mechanism).
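A minimal sketch of this linear solve, with a small stand-in Hessian; the conjugate gradient routine below touches the Hessian only through matrix-vector products:

```python
import numpy as np

# Sketch of iMAML's linear solve: the implicit meta-gradient g_j satisfies
# (I + (1/lambda) H) g_j = v, where H is the Hessian of the inner loss at the
# adapted parameters and v the test-loss gradient. H and v are small stand-ins.
lam = 2.0
H = np.array([[2.0, 0.3], [0.3, 1.0]])
v = np.array([1.0, -1.0])

def matvec(g):
    return g + (H @ g) / lam          # (I + H/lambda) g, no matrix inversion

def conjugate_gradient(matvec, b, iters=20, tol=1e-10):
    x = np.zeros_like(b)
    r = b - matvec(x)                 # residual
    p = r.copy()                      # search direction
    for _ in range(iters):
        if np.linalg.norm(r) < tol:
            break
        Ap = matvec(p)
        step = (r @ r) / (p @ Ap)
        x = x + step * p
        r_new = r - step * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

g = conjugate_gradient(matvec, v)
```

For a base-learner with millions of parameters, `matvec` would be a Hessian-vector product computed by automatic differentiation, so the full Hessian never materializes.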
At each inner loop step, iMAML computes the meta-gradient g j . After processing a batch of tasks, these gradients are averaged and used to update the initialization θ. Since it does not differentiate through the optimization process, we are free to use any other (non-differentiable) inner-optimizer.
In summary, iMAML reduces memory costs significantly as it need not differentiate through the optimization trajectory, which also allows for greater flexibility in the choice of inner optimizer. Additionally, it can account for longer optimization paths. The computational costs stay roughly the same compared to MAML (Finn et al., 2017). Future work could investigate other inner optimization procedures.

Meta-SGD
Meta-SGD (Li et al., 2017), or meta-stochastic gradient descent, is similar to MAML (Finn et al., 2017) (Section 5.5). However, on top of learning an initialization, Meta-SGD also learns a learning rate for every model parameter in θ, building on the insight that the optimizer can be seen as a trainable entity.
The standard SGD update rule is given in Equation 15. The Meta-SGD optimizer uses a more general update, namely

θ_j = θ − α ⊙ ∇_θ L_{D^{tr}_{T_j}}(θ),

where ⊙ is the element-wise product. Note that the learning rate α is now a vector (hence the bold font in the original notation) instead of a scalar, which allows for greater flexibility in the sense that each parameter has its own learning rate. The goal is to learn the initialization θ and the learning rate vector α such that the generalization ability is as large as possible.
More mathematically precise, the learning objective is

min_{θ,α} Σ_{T_j ∼ p(T)} L_{D^{test}_{T_j}}(θ_j) = min_{θ,α} Σ_{T_j ∼ p(T)} L_{D^{test}_{T_j}}(θ − α ⊙ ∇_θ L_{D^{tr}_{T_j}}(θ)),

where we used a simple substitution for θ_j (note that the inner update is computed on the training set D^{tr}_{T_j}, which can be observed during the meta-training phase).
The learning process is visualized in Figure 22. Note that the meta-SGD optimizer is trained to maximize generalization ability after only one update step. Since this learning objective has a fully differentiable loss function, the meta-SGD optimizer itself can be trained using standard SGD. In summary, Meta-SGD is more expressive than MAML as it does not only learn an initialization, but also learning rates per parameter. This, however, does come at the cost of an increased number of hyperparameters.
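The inner update with a per-parameter learning rate vector amounts to a single element-wise operation (all values below are illustrative):

```python
import numpy as np

# Meta-SGD's inner update: the learning rate alpha is a *vector*, meta-learned
# jointly with the initialization theta, so every parameter gets its own step
# size.
theta = np.array([1.0, 1.0])
alpha = np.array([0.5, 0.01])        # per-parameter learned learning rates
grad = np.array([2.0, 2.0])          # gradient of the task's training loss

theta_j = theta - alpha * grad       # element-wise product: one inner step
```

With identical gradients, the two parameters move by very different amounts, which a single scalar learning rate cannot express.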

Reptile
Reptile (Nichol et al., 2018) is another optimization-based technique that, like MAML (Finn et al., 2017), solely attempts to find a good set of initialization parameters θ. The way in which Reptile attempts to find this initialization is quite different from MAML. It repeatedly samples a task, trains on the task, and moves the model weights towards the trained weights (Nichol et al., 2018). Algorithm 3 displays the pseudocode describing this simple process.

Algorithm 3 Reptile, by Nichol et al. (2018)
1: Initialize θ
2: for i = 1, 2, . . . do
3:   Sample task T_j = (D^{tr}_{T_j}, D^{test}_{T_j}) and corresponding loss function L_{T_j}
4:   Perform k gradient update steps to get θ̃_j
5:   Move initialization point θ towards θ̃_j
6: end for

Nichol et al. (2018) note that it is possible to treat (θ − θ̃_j)/α as a gradient, where α is the learning rate of the inner stochastic gradient descent optimizer (line 4 in the pseudocode), and to feed it into a meta-optimizer (e.g., Adam). Moreover, instead of sampling one task at a time, one could sample a batch of n tasks and move the initialization θ towards the average update direction θ̄ = (1/n) Σ^n_{j=1} (θ̃_j − θ), granting the update rule θ := θ + εθ̄, where ε is the outer step size. The intuition behind Reptile is that updating the initialization weights towards updated parameters will grant a good inductive bias for tasks from the same family. By performing Taylor expansions of the gradients of Reptile and MAML (both first-order and second-order), Nichol et al. (2018) show that the expected gradients differ in their direction. They argue, however, that in practice the gradients of Reptile will also bring the model towards a point minimizing the expected loss over tasks.

A mathematical argument as to why Reptile works goes as follows. Let θ denote the initial parameters, and θ*_j the optimal set of weights for task T_j. Lastly, let d be the Euclidean distance function. Then, the goal is to minimize the expected distance between the initialization point θ and the optimal point θ*_j:

min_θ E_{T_j ∼ p(T)} [ (1/2) d(θ, θ*_j)² ].    (31)

The gradient of this expected distance with respect to the initialization θ is given by

∇_θ E_{T_j ∼ p(T)} [ (1/2) d(θ, θ*_j)² ] = E_{T_j ∼ p(T)} [ θ − θ*_j ],

where we used the fact that the gradient of the squared Euclidean distance between two points x_1 and x_2 is the vector 2(x_1 − x_2) (the factor 1/2 cancels this constant). Nichol et al. (2018) go on to argue that performing gradient descent on this objective would result in the update rule

θ = θ − β(θ − θ*_{T_j}).

Since we do not know θ*_{T_j}, one can approximate this term by k steps of gradient descent, SGD(L_{T_j}, θ, k). In short, Reptile can be seen as gradient descent on the distance minimization objective given in Equation 31. A visualization is shown in Figure 23: the initialization θ moves towards the optimal weights for tasks 1 and 2 in interleaved fashion (hence the oscillations).
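Reptile's loop is short enough to run on a toy 1-D objective; the quadratic per-task losses and all constants below are hypothetical:

```python
# Reptile on a toy 1-D objective L_j(theta) = 0.5 * (theta - c_j)^2, where
# c_j is the optimal weight theta*_j of task j.
centers = [0.0, 1.0]          # theta*_1 and theta*_2

def inner_sgd(theta, c, k=5, lr=0.2):
    # k gradient steps on L_j; the gradient of 0.5*(theta - c)^2 is (theta - c)
    for _ in range(k):
        theta = theta - lr * (theta - c)
    return theta

theta = 5.0                   # initialization
eps = 0.5                     # outer step size
for i in range(200):
    c = centers[i % 2]        # alternate the two tasks (hence the oscillation)
    theta_j = inner_sgd(theta, c)
    theta = theta + eps * (theta_j - theta)   # move theta towards theta_j
```

The initialization ends up oscillating between the two task optima, close to their midpoint, matching the schematic trajectory of Figure 23.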
In conclusion, Reptile is an extremely simple meta-learning technique that does not need to differentiate through the optimization trajectory like, e.g., MAML (Finn et al., 2017), saving time and memory costs. However, its theoretical foundation is a bit weaker, and its performance may be slightly worse than that of MAML.

Figure 23: Schematic visualization of Reptile's learning trajectory. Here, θ*_1 and θ*_2 are the optimal weights for tasks T_1 and T_2 respectively. The initialization parameters θ oscillate between these. Adapted from Nichol et al. (2018).

LEO
Latent Embedding Optimization, or LEO, was proposed by Rusu et al. (2018) to combat an issue of gradient-based meta-learners, such as MAML (Finn et al., 2017) (see Section 5.5), in few-shot settings (N-way, k-shot). These techniques operate in a high-dimensional parameter space using gradient information from only a few examples, which can lead to poor generalization. LEO alleviates this issue by learning a lower-dimensional latent embedding space, which indirectly allows us to learn a good set of initial parameters θ. Additionally, the embedding space is conditioned on tasks, allowing for more expressivity. In theory, LEO could generate initial parameters for the entire base-learner network, but the authors only experimented with generating the parameters of the final layer.
The complete workflow of LEO is shown in Figure 24. As we can see, given a task T_j, the corresponding train set D^{tr}_{T_j} is fed into an encoder, which produces hidden codes for each example in that set. These hidden codes are paired and concatenated in every possible manner, granting us (Nk)² pairs, where N is the number of classes in the training set and k the number of examples per class. These paired codes are then fed into a relation net (Sung et al., 2018) (see Section 3.5). The resulting embeddings are grouped by class and parameterize a probability distribution over latent codes z_n (for class n) in a low-dimensional space Z. More formally, let x^ℓ_n denote the ℓ-th example of class n in D^{tr}_{T_j}. Then, the mean μ^e_n and variance σ^e_n of a Gaussian distribution over latent codes for class n are computed by averaging the relation net outputs over all pairs that involve examples of class n, where φ_r and φ_e are the parameters of the relation net and encoder, respectively. Intuitively, this averaging (a triple summation) ensures that every example with class n in D^{tr}_{T_j} is paired with every example from all classes. Given μ^e_n and σ^e_n, one can sample a latent code z_n ∼ N(μ^e_n, diag(σ^{e2}_n)) for class n, which serves as a latent embedding of the task training data.
The decoder can then generate a task-specific initialization θ_n for class n as follows. First, the latent code z_n is decoded into a mean μ^d_n and variance σ^d_n of a Gaussian distribution over weights. These are then used to sample initialization weights θ_n ∼ N(μ^d_n, diag(σ^{d2}_n)). The loss from the generated weights can then be propagated backwards to adjust the embedding space. In practice, generating such a high-dimensional set of parameters from a low-dimensional embedding can be quite problematic. Therefore, LEO uses pre-trained models and only generates weights for the final layer, which limits the expressivity of the model.
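The dimensionality bottleneck can be sketched as follows; the encoder, relation network, and decoder are collapsed into fixed random maps, so this shows only the shapes of the computation, not a trained LEO:

```python
import numpy as np

rng = np.random.default_rng(5)

# Structural sketch of LEO's bottleneck: per-class latent codes z_n live in a
# low-dimensional space and are decoded into (much higher-dimensional)
# final-layer weights theta_n. Nothing here is trained.
latent_dim, weight_dim = 4, 64

def encode_class(examples):
    # stand-in for encoder + relation net: mean feature -> (mu_e, sigma_e)
    mu = np.mean(examples, axis=0)[:latent_dim]
    sigma = np.full(latent_dim, 0.1)
    return mu, sigma

W_dec = rng.normal(size=(weight_dim, latent_dim))

def decode(z):
    # stand-in decoder: latent code -> sampled final-layer weights theta_n
    mu_d = W_dec @ z
    return mu_d + 0.01 * rng.normal(size=weight_dim)

examples = rng.normal(size=(5, 8))                    # k = 5 examples per class
mu_e, sigma_e = encode_class(examples)
z_n = mu_e + sigma_e * rng.normal(size=latent_dim)    # z_n ~ N(mu_e, diag)
theta_n = decode(z_n)
```

Adaptation by gradient descent happens in the 4-dimensional space of z_n rather than the 64-dimensional weight space, which is the point of the construction.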
A key advantage of LEO is that it optimizes in a lower-dimensional latent embedding space, which aids generalization performance. However, the approach is more complex than e.g. MAML (Finn et al., 2017), and its applicability is limited to few-shot learning settings.

Online MAML (FTML)
Online MAML is an extension of MAML (Finn et al., 2017) that makes it applicable to online learning settings (Anderson, 2008). In the online setting, we are presented with a sequence of tasks T_t with corresponding loss functions {L_{T_t}}^T_{t=1}, for some potentially infinite time horizon T. The goal is to pick a sequence of parameters {θ_t}^T_{t=1} that performs well on the presented loss functions. This objective is captured by the regret over the entire sequence:

Regret_T = Σ^T_{t=1} L_{T_t}(θ̃_t) − min_θ Σ^T_{t=1} L_{T_t}(θ̃),

where θ̃_t are the parameters resulting from a one-step gradient update on task t, starting from the initialization θ_t chosen by the agent at time t, and θ̃ denotes the analogous one-step update starting from a single fixed initialization θ (just as in MAML). The left term thus reflects the loss of the updated parameters chosen by the agent, whereas the right term represents the minimum obtainable loss (in hindsight) from a single fixed set of parameters θ. Note that this setup assumes that the agent can make updates to its chosen parameters (transform its initial choice at time t from θ_t to θ̃_t). To minimize the regret, FTML (Follow The Meta Leader), inspired by FTL (Follow The Leader) (Hannan, 1957; Kalai and Vempala, 2005), was proposed. The basic idea is to set the parameters for the next time step (t + 1) equal to the best parameters in hindsight:

θ_{t+1} = argmin_θ Σ^t_{k=1} L_{T_k}(θ̃_k(θ)),

where θ̃_k(θ) denotes the one-step update for task k, starting from θ. The gradient used to perform meta-updates is then given by

g_t(θ) = ∇_θ E_{T_k ∼ p_t(T)} [ L_{T_k}(θ̃_k(θ)) ],

where p_t(T) is a uniform distribution over tasks 1, ..., t (at time t).
Algorithm 4 contains the full pseudocode for FTML. In this algorithm, MetaUpdate performs a few (N meta ) meta-steps. In each meta-step, a task is sampled from B, together with train and test mini-batches to compute the gradient g t in Equation 37. The initialization θ is then updated (θ := θ − βg t (θ)), where β is the meta-learning rate. Note that the memory usage keeps increasing over time, as at every time step t, we append tasks to the buffer B, and keep task data sets in memory.
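A skeleton of this online loop, with 1-D quadratic tasks and a first-order MAML-style meta-step standing in for the full MetaUpdate of FTML (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Skeleton of FTML's online loop: tasks arrive one at a time, each is appended
# to the buffer B, and every MetaUpdate samples uniformly from all tasks seen
# so far (the distribution p_t). A 1-D quadratic loss and a first-order
# MAML-style step stand in for the real inner/outer updates.

def loss_grad(theta, c):
    return theta - c                  # gradient of 0.5 * (theta - c)^2

theta, alpha, beta = 0.0, 0.3, 0.1
buffer = []                           # B: grows forever (the memory downside)
stream = [0.0, 2.0, 4.0, 2.0]         # hypothetical task parameters arriving online

for c_t in stream:
    buffer.append(c_t)                # append T_t to B
    for _ in range(25):               # N_meta meta-steps per round
        c = buffer[rng.integers(len(buffer))]       # sample from p_t(T)
        fast = theta - alpha * loss_grad(theta, c)  # inner update
        theta = theta - beta * loss_grad(fast, c)   # first-order outer update
```

The buffer never shrinks, which makes the growing memory cost mentioned above directly visible.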

In summary, Online MAML is a robust technique for online learning. A downside of this approach is that the computational costs keep growing over time, as all encountered data are stored. Reducing these costs is a direction for future work. One could also examine how well the approach works when more than one inner gradient update step per task is used.

LLAMA
Grant et al. (2018) mold MAML into a probabilistic framework, such that a probability distribution over task-specific parameters θ_j is learned, instead of a single point estimate. In this way, multiple potential solutions can be obtained for a task. The resulting technique is called LLAMA (Laplace Approximation for Meta-Adaptation). Importantly, LLAMA has only been developed for supervised learning settings.
A key observation is that a neural network f_{θ_j}, parameterized by updated parameters θ_j (obtained from a few gradient updates using D^{tr}_{T_j}), outputs class probabilities P(y_i | x_i, θ_j). To minimize the error on the test set D^{test}_{T_j}, the model must output large probability scores for the true classes. This objective is captured by the log-likelihood loss

L(θ_j) = −E_{(x_i, y_i) ∼ p_{T_j}} [ log P(y_i | x_i, θ_j) ].

Simply put, if we see a task j as a probability distribution over examples p_{T_j}, we wish to maximize the probability that the model predicts the correct class y_i, given an input x_i. This can be done by plain gradient descent, as shown in Algorithm 5, where β is the meta-learning rate. Line 4 refers to ML-LAPLACE, a subroutine that computes task-specific updated parameters θ_j and estimates the negative log-likelihood (loss function) that is used to update the initialization θ, as shown in Algorithm 6. Grant et al. (2018) approximated the quadratic curvature matrix Ĥ using K-FAC (Martens and Grosse, 2015). The trick is that the initialization θ defines a distribution p(θ_j | θ) over task-specific parameters θ_j. This distribution was taken to be a diagonal Gaussian (Grant et al., 2018). Then, to sample solutions for a new task T_j, one can simply generate possible solutions θ_j from the learned Gaussian distribution.
Algorithm 5 LLAMA, by Grant et al. (2018)
1: Initialize θ randomly
2: while not converged do
3:   Sample a batch of J tasks: B = {T_1, ..., T_J} ∼ p(T)
4:   For every T_j ∈ B, run ML-LAPLACE to compute θ_j and estimate its negative log-likelihood
5:   Update θ using the gradient of the summed estimated losses
6: end while

In short, LLAMA extends MAML in probabilistic fashion, such that one can obtain multiple solutions for a single task, instead of one. This does, however, increase the computational costs. On top of that, the used Laplace approximation (in ML-LAPLACE) can be quite inaccurate.

PLATIPUS
PLATIPUS builds upon the probabilistic interpretation of LLAMA (Grant et al., 2018), but learns a probability distribution over initializations θ, instead of over task-specific parameters θ_j. Thus, PLATIPUS allows one to sample an initialization θ ∼ p(θ).

BMAML
Bayesian MAML (BMAML) (Yoon et al., 2018) also takes a probabilistic view: it maintains M particles (parameter sets), where θ^m_{T_j} is the m-th particle obtained by training on the training set D^{tr} of task T_j with Stein Variational Gradient Descent (SVGD). Yoon et al. (2018) proposed a new meta-loss to train BMAML, called the Chaser Loss. This loss relies on the insight that we want the approximated parameter distribution obtained from the train set, p^n_{T_j}(θ_{T_j} | D^{tr}, Θ_0), and the true distribution p^∞_{T_j}(θ_{T_j} | D^{tr} ∪ D^{test}) to be close to each other (since the task is the same). Here, n denotes the number of SVGD steps, and Θ_0 is the set of initial particles, in similar fashion to the initial parameters θ used by MAML. Since the true distribution is unknown, Yoon et al. (2018) approximate it by running SVGD for s additional steps, granting us the leader Θ^{n+s}_{T_j}, where the s additional steps are performed on the combined train and test set. The intuition is that as the number of updates increases, the obtained distribution becomes more like the true one. Θ^n_{T_j} in this context is called the chaser, as it wants to get closer to the leader. The proposed meta-loss is then given by

L_BMAML(Θ_0) = Σ_{T_j ∈ B} d(Θ^n_{T_j}(Θ_0), SG(Θ^{n+s}_{T_j}(Θ_0))),

where d is a distance function and B a batch of tasks. The full pseudocode of BMAML is shown in Algorithm 8. Here, Θ^n_{T_j}(Θ_0) denotes the set of particles after n updates on task T_j, and SG means "stop gradients" (we do not want the leader to depend on the initialization, as the leader must lead).

Algorithm 8 BMAML by Yoon et al. (2018)
1: Initialize Θ_0 randomly
2: while not converged do
3:   Sample a batch of tasks B ∼ p(T)
4:   for T_j ∈ B do
5:     Compute chaser Θ^n_Tj(Θ_0) with SVGD on D^tr_Tj
6:     Compute leader Θ^{n+s}_Tj(Θ_0) with SVGD on D^tr_Tj ∪ D^test_Tj
7:   end for
8:   Update Θ_0 ← Θ_0 − β ∇_{Θ_0} Σ_{T_j ∈ B} d(Θ^n_Tj(Θ_0), SG(Θ^{n+s}_Tj(Θ_0)))
9: end while

In summary, BMAML is a robust optimization-based meta-learning technique that can propose M potential solutions to a task. Additionally, it is applicable to reinforcement learning by using Stein Variational Policy Gradient instead of SVGD. A downside of this approach is that one has to keep M parameter sets in memory, which does not scale well. Reducing the memory costs is a direction for future work (Yoon et al., 2018). Furthermore, SVGD is sensitive to the selected kernel function, which was pre-defined in BMAML. However, Yoon et al. (2018) point out that it may be beneficial to learn the kernel function instead. This is another possibility for future research.
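The chaser/leader interplay can be illustrated with a toy sketch. For brevity, SVGD is replaced here by independent gradient-descent steps on each particle, and the stop-gradient is a no-op (NumPy has no autodiff); the task, learning rate, and particle count are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit y = w * x. BMAML would update the particles with SVGD;
# plain per-particle gradient descent keeps this sketch short.
x_tr = rng.standard_normal(5); y_tr = 2.0 * x_tr
x_te = rng.standard_normal(5); y_te = 2.0 * x_te

def grad_step(particles, x, y, lr=0.1):
    # One gradient step of the squared loss for each particle independently.
    grads = np.array([np.mean(2 * (w * x - y) * x) for w in particles])
    return particles - lr * grads

def run(particles, x, y, steps):
    for _ in range(steps):
        particles = grad_step(particles, x, y)
    return particles

M, n_steps, s = 4, 3, 2
theta0 = rng.standard_normal(M)               # initial particles Theta_0

chaser = run(theta0, x_tr, y_tr, n_steps)     # n steps on D_tr
leader = run(chaser, np.concatenate([x_tr, x_te]),
             np.concatenate([y_tr, y_te]), s) # s extra steps on D_tr U D_test
leader = leader.copy()                        # stand-in for stop-gradient SG(.)

# Chaser loss: squared distance between chaser and (frozen) leader particles.
chaser_loss = np.sum((chaser - leader) ** 2)
print(chaser_loss >= 0.0)  # True
```

In BMAML this scalar would be differentiated with respect to Θ_0, pulling the initial particles towards regions where a few SVGD steps already land close to the leader.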

Simple Differentiable Solvers
Bertinetto et al. (2019) take a quite different approach: they pick simple base-learners that have an analytical closed-form solution. The intuition is that the existence of a closed-form solution allows for good learning efficiency. They propose two techniques using this principle, namely R2-D2 (Ridge Regression Differentiable Discriminator) and LR-D2 (Logistic Regression Differentiable Discriminator). We cover both in turn.
Let g_φ : X → R^e be a pre-trained input embedding model (e.g., a CNN), which outputs embeddings with a dimensionality of e. Furthermore, assume that we use a linear predictor function f(g_φ(x_i)) = g_φ(x_i) W, where W is an e × o weight matrix, and o is the output dimensionality (of the label). When using regularized Ridge Regression (as done by R2-D2), one uses the optimal weights

W = (X^T X + γI)^{−1} X^T Y,

where X ∈ R^{n×e} is the input matrix, containing n rows (one for each embedded input g_φ(x_i)), Y ∈ R^{n×o} is the output matrix with the correct outputs corresponding to the inputs, and γ is a regularization term to prevent overfitting. Note that the analytical solution contains the term X^T X ∈ R^{e×e}, which is quadratic in the size of the embeddings. Since e can become quite large when using deep neural networks, Bertinetto et al. (2019) use Woodbury's identity to obtain

W = X^T (X X^T + γI)^{−1} Y,

where X X^T ∈ R^{n×n} is linear in the embedding size and quadratic in the number of examples, which is more manageable in few-shot settings, where n is very small. To make predictions with this Ridge Regression based model, one can compute

Ŷ = α X_test W + β,

where α and β are hyperparameters of the base-learner that can be learned by the meta-learner, and X_test ∈ R^{m×e} corresponds to the m test inputs of a given task. Thus, the meta-learner needs to learn α, β, γ, and φ (the embedding weights of the CNN). The technique can also be applied to iterative solvers, provided that the optimization steps are differentiable (Bertinetto et al., 2019). LR-D2 uses the Logistic Regression objective and Newton's method as solver. Outputs y ∈ {−1, +1}^n are now binary. Let w denote a parameter row of our linear model (parameterized by W). Then, the i-th iteration of Newton's method updates w_i as follows:

w_i = (X^T diag(s_i) X + γI)^{−1} X^T diag(s_i) z_i,

where μ_i = σ(w^T_{i−1} X), s_i = μ_i(1 − μ_i), z_i = w^T_{i−1} X + (y − μ_i)/s_i, and σ is the sigmoid function.
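The equivalence of the primal and Woodbury forms of the Ridge Regression solution is easy to verify numerically. A minimal NumPy sketch with arbitrary dimensions (the data here is random, not a real few-shot task):

```python
import numpy as np

rng = np.random.default_rng(0)

n, e, o = 5, 64, 3                # n examples, embedding size e, o outputs
X = rng.standard_normal((n, e))   # embedded inputs g_phi(x_i), one per row
Y = rng.standard_normal((n, o))   # correct outputs
gamma = 0.1                       # regularization strength

# Primal solution: (X^T X + gamma I)^{-1} X^T Y -- inverts an e x e matrix.
W_primal = np.linalg.solve(X.T @ X + gamma * np.eye(e), X.T @ Y)

# Dual solution via Woodbury: X^T (X X^T + gamma I)^{-1} Y -- inverts n x n.
W_dual = X.T @ np.linalg.solve(X @ X.T + gamma * np.eye(n), Y)

print(np.allclose(W_primal, W_dual))  # True
```

In the few-shot regime n is tiny while e can be in the thousands, so solving the n × n system is far cheaper, which is exactly why R2-D2 uses the dual form.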
Since the term X^T diag(s_i) X is a matrix of size e × e, and thus again quadratic in the embedding size, Woodbury's identity is also applied here to obtain

w_i = X^T (X X^T + γ diag(s_i)^{−1})^{−1} z_i,

making it quadratic in the number of inputs n instead, which is not a big problem since n is small in the few-shot setting. The main difference compared to R2-D2 is that the base-solver has to be run for multiple iterations to obtain W. In the few-shot setting, the base-level optimizers compute the weight matrix W for a given task T_i. The obtained loss on the test set of the task, L_{D^test}, is then used to update the parameters φ of the input embedding function (e.g., a CNN) and the hyperparameters of the base-learner. Lee et al. (2019) have done similar work to Bertinetto et al. (2019), but with linear Support Vector Machines (SVMs) as base-learner. Their approach, dubbed MetaOptNet, achieved state-of-the-art performance on few-shot image classification.
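A sketch of the Newton/IRLS iteration in its Woodbury form (inverting an n × n rather than an e × e matrix) is given below. Note that this sketch uses {0, 1} labels, the standard IRLS convention, rather than the {−1, +1} labels above; the data and hyperparameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

n, e = 6, 32
X = rng.standard_normal((n, e))                # one embedded input per row
y = (rng.standard_normal(n) > 0).astype(float) # labels in {0, 1}
gamma = 1.0                                    # regularization strength

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w = np.zeros(e)
for _ in range(10):                  # Newton / IRLS iterations
    mu = sigmoid(X @ w)
    s = np.clip(mu * (1.0 - mu), 1e-8, None)   # clip to keep 1/s finite
    z = X @ w + (y - mu) / s
    # Woodbury form: w = X^T (X X^T + gamma * diag(1/s))^{-1} z,
    # so only an n x n system is solved per iteration.
    w = X.T @ np.linalg.solve(X @ X.T + gamma * np.diag(1.0 / s), z)

print(w.shape)  # (32,)
```

Because the re-weighting terms s and z change every iteration, the n × n solve must be repeated, which is the "multiple iterations" overhead of LR-D2 relative to the single solve of R2-D2.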
In short, simple differentiable solvers are conceptually simple and reasonably fast in terms of computation time, but limited to few-shot learning settings. Investigating the use of other simple base-learners is a direction for future work.

Optimization-based Techniques, in conclusion
Optimization-based techniques aim to learn new tasks quickly through (learned) optimization procedures. Note that this closely resembles base-level learning, which also occurs through optimization (e.g., gradient descent). However, in contrast to base-level techniques, optimization-based meta-learners can learn the optimizer and/or are exposed to multiple tasks, which allows them to learn to learn new tasks quickly.
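The inner/outer structure that this paragraph describes can be made concrete with a tiny first-order, MAML-style sketch. The task family, losses, and learning rates are all invented for illustration; real methods operate on neural networks and may additionally learn the optimizer itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task family: 1-D quadratics (w - c)^2 with task-specific optima c.
# The meta-learner adapts a shared initialization theta across tasks.
def loss_grad(w, c):
    return 2.0 * (w - c)

alpha, beta = 0.1, 0.05      # inner (task) and outer (meta) learning rates
theta = 0.0                  # shared initialization

for _ in range(200):         # outer loop over batches of sampled tasks
    grad_sum = 0.0
    for c in rng.normal(1.0, 0.1, size=4):       # batch of 4 tasks
        w = theta - alpha * loss_grad(theta, c)  # inner-level adaptation
        # First-order approximation: outer gradient evaluated at adapted w.
        grad_sum += loss_grad(w, c)
    theta -= beta * grad_sum                     # outer-level meta-update

print(round(theta, 2))
```

The initialization drifts towards the region of task optima (around 1.0 here), so that a single inner gradient step already performs well on a freshly sampled task, which is the essence of learning to learn new tasks quickly.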
A key advantage of optimization-based approaches is that they can achieve better performance on wider task distributions than, e.g., model-based approaches. However, optimization-based techniques optimize a base-learner for every task that they are presented with and/or learn the optimization procedure, which is computationally expensive (Hospedales et al., 2020).
Optimization-based meta-learning is a very active area of research. We expect future work to reduce the computational demands of these methods and to improve solution quality and generalization. We think that benchmarking and reproducibility research will play an important role in these improvements.

Concluding Remarks
In this section, we give a helicopter view of all that we discussed, and the field of Deep Meta-Learning in general. We will also discuss challenges and future research.

Overview
In recent years, there has been a shift in focus in the broad meta-learning community. Traditional algorithm selection and hyperparameter optimization for classical machine learning techniques (e.g. Support Vector Machines, Logistic Regression, Random Forests, etc.) have made room for Deep Meta-Learning, or equivalently, the pursuit of self-improving neural networks that can leverage prior learning experience to learn new tasks more quickly. Instead of training a new model from scratch for different tasks, we can use the same (meta-learning) model across tasks. As such, meta-learning can widen the applicability of powerful deep learning techniques to domains where less data is available and computational resources are limited.
Deep Meta-Learning techniques are characterized by their meta-objective, which allows them to maximize performance across various tasks, instead of a single one, as is the case in base-level learning objectives. This meta-objective is reflected in the training procedure of meta-learning methods, as they learn on a set of different meta-training tasks. The few-shot setting lends itself nicely towards this end, as tasks consist of few data points. This makes it computationally feasible to train on many different tasks, and it allows us to evaluate whether a neural network can learn new concepts from few examples. Task construction for training and evaluation does require some special attention. That is, it has been shown beneficial to match training and test conditions (Vinyals et al., 2016), and perhaps train in a more difficult setting than the one that will be used for evaluation (Snell et al., 2017).
On a high level, there are three categories of Deep Meta-Learning techniques, namely i) metric-, ii) model-, and iii) optimization-based ones, which rely on i) computing input similarity, ii) task embeddings with states, and iii) task-specific updates, respectively. Each approach has strengths and weaknesses. Metric-learning techniques are simple and effective (Garcia and Bruna, 2017), but are not readily applicable outside of the supervised learning setting (Hospedales et al., 2020). Model-based techniques, on the other hand, can have very flexible internal dynamics, but lack generalization ability to tasks more distant than the ones seen at meta-train time. Optimization-based approaches have shown greater generalizability, but are in general computationally expensive, as they optimize a base-learner for every task (Hospedales et al., 2020). Table 2 provides a concise, tabular overview of these approaches. Many techniques have been proposed for each one of the categories, and the underlying ideas may vary greatly, even within the same category. Table 3, therefore, provides an overview of all methods that we have discussed in this work, together with their key ideas, their applicability to supervised learning (SL) and reinforcement learning (RL) settings, and the benchmarks that were used for testing them.

Open Challenges and Future Work
Despite the great potential of Deep Meta-Learning techniques, there are still open challenges, which we discuss here.
To begin with, Deep Meta-Learning techniques can be susceptible to the memorization problem (meta-overfitting), where the neural network has memorized tasks seen at meta-training time, and fails to generalize to new tasks. More research is required to better understand this problem. Clever task design and meta-regularization may prove useful to avoid such problems (Yin et al., 2020).
Another problem is that most of the meta-learning techniques discussed in this work are evaluated on narrow benchmark sets. This means that the data that the meta-learner used for training are not too distant from the data used for evaluating its performance. As such, one may wonder how well these techniques are actually able to adapt to more distant tasks. Chen et al. (2019) showed that the ability to adapt to new tasks decreases as they become more distant from the tasks seen at training time. Moreover, a simple non-meta-learning baseline (based on pre-training and fine-tuning) can outperform state-of-the-art meta-learning techniques when meta-test tasks come from a different data set than the one used for meta-training.
In reaction to these findings, Triantafillou et al. (2020) have recently proposed the Meta-Dataset benchmark, which consists of various previously used meta-learning benchmarks such as Omniglot (Lake et al., 2011) and ImageNet (Deng et al., 2009). This way, meta-learning techniques can be evaluated in more challenging settings where tasks are diverse. Following Hospedales et al. (2020), we think that this new benchmark can prove to be a good means for the investigation and development of meta-learning algorithms for such challenging scenarios.
As mentioned earlier in this section, Deep Meta-Learning has the appealing prospect of widening the applicability of deep learning techniques to more real-world domains. For this, increasing the generalization ability of these techniques is very important. Additionally, the computational costs associated with the deployment of meta-learning techniques should be small. While these techniques can learn new tasks quickly, meta-training can be quite computationally expensive. Thus, decreasing the required computation time and memory costs of Deep Meta-Learning techniques remains an open challenge.
Some real-world problems demand systems that can perform well in online, or active learning settings. The investigation of Deep Meta-Learning in these settings (Yoon et al., 2018; Munkhdalai and Yu, 2017; Vuorio et al., 2018) remains an important direction for future work.
Yet another direction for future research is the creation of compositional Deep Meta-Learning systems, which instead of learning flat and associative functions x → y, organize knowledge in a compositional manner. This would allow them to decompose an input x into several (already learned) components c 1 (x), ..., c n (x), which in turn could help the performance in low-data regimes (Tokmakov et al., 2019).
The question has been raised whether contemporary Deep Meta-Learning techniques actually learn how to perform rapid learning, or simply learn a set of robust high-level features, which can be (re)used for many (new) tasks. Raghu et al. (2020) investigated this question for the most popular Deep Meta-Learning technique MAML, and found that it largely relies on feature reuse. It would be interesting to see whether we can develop techniques that rely more on fast learning, and what the effect would be on performance.