Goal-conditioned Offline Reinforcement Learning through State Space Partitioning

Offline reinforcement learning (RL) aims to infer sequential decision policies using only offline datasets. This is a particularly difficult setup, especially when learning to achieve multiple different goals or outcomes under a given scenario with only sparse rewards. For offline learning of goal-conditioned policies via supervised learning, previous work has shown that an advantage-weighted log-likelihood loss guarantees monotonic policy improvement. In this work, we argue that, despite its benefits, this approach is still insufficient to fully address the distribution shift and multi-modality problems. The latter is particularly severe in long-horizon tasks, where finding a unique and optimal policy that goes from a state to the desired goal is challenging because there may be multiple, potentially conflicting solutions. To tackle these challenges, we propose a complementary advantage-based weighting scheme that introduces an additional source of inductive bias: given a value-based partitioning of the state space, the contribution of actions expected to lead to target regions that are easier to reach, compared to the final goal, is further increased. Empirically, we demonstrate that the proposed approach, Dual-Advantage Weighted Offline Goal-conditioned RL (DAWOG), outperforms several competing offline algorithms on commonly used benchmarks. Analytically, we offer a guarantee that the learnt policy is never worse than the underlying behaviour policy.


Introduction
Goal-conditioned reinforcement learning (GCRL) aims to learn policies capable of reaching a wide range of distinct goals, effectively creating a vast repertoire of skills [1][2][3]. When extensive historical training datasets are available, it becomes possible to infer decision policies that surpass the unknown behavior policy (i.e., the policy that generated the data) in an offline manner, without necessitating further interactions with the environment [4][5][6]. A primary challenge in GCRL lies in the reward signal's sparsity: an agent only receives a reward when it achieves the goal, providing a weak learning signal. This becomes especially challenging in long-horizon problems, where reaching the goals by chance alone is difficult.
In an offline setting, the challenge of learning with sparse rewards becomes even more complex due to the inability to explore beyond the already observed states and actions. When the historical data comprises expert demonstrations, imitation learning presents a straightforward approach to offline GCRL [7,8]: in goal-conditioned supervised learning (GCSL), offline trajectories are iteratively relabeled, and a policy learns to imitate them directly. Furthermore, GCSL's objective lower bounds a function of the original GCRL objective [9]. However, in practice, the available demonstrations can often contain suboptimal examples, leading to inferior policies. A simple yet effective solution involves re-weighting the actions during policy training within a likelihood maximization framework. A parameterized advantage function is employed to estimate the expected quality of an action conditioned on a target goal, so that higher-quality actions receive higher weights [9]. This method is known as goal-conditioned exponential advantage weighting (GEAW).
Although GEAW is effective, we contend in this paper that it grapples with the pervasive multi-modality issue, especially in tasks with extended horizons. The challenge lies in pinpointing an optimal policy to achieve any set goal, given the multiple, sometimes conflicting, paths leading to that goal. While a goal-conditioned advantage function emphasizes actions likely to achieve the goal during training, we believe that introducing an extra layer of inductive bias can offer a shorter learning horizon, a more robust learning signal, and more achievable objectives. This, in turn, aids the policy in discerning and adopting the best short-term trajectories amidst conflicting ones.
We propose a complementary advantage weighting scheme that also utilizes the goal-conditioned value function. This provides additional guidance to address multi-modality. During training, the state space is divided into a fixed number of regions, ensuring that all states within the same region have approximately the same goal-conditioned value. These regions are then ranked from the lowest to the highest value. Given the current state, the policy is encouraged to reach the immediately higher-ranking region, relative to the state's present region, in the fewest steps possible. This target region offers a state-dependent, short-horizon objective that is easier to achieve compared to the final goal, leading to generally shorter successful trajectories. Our proposed algorithm, Dual-Advantage Weighted Offline GCRL (DAWOG), seamlessly integrates the original goal-conditioned advantage weight with the new target-based advantage to effectively address the multi-modality issue.

Fig. 1: Visualization of trajectories (in blue) across various maze environments. These trajectories are produced by policies trained through supervised learning using different action weighting schemes: no action weighting (left), goal-conditioned advantage weighting (middle), and dual-advantage weighting (right). The task involves an agent (represented as an ant) navigating from a starting position (orange circle) to an end goal (red circle). Branching points near the circles highlight areas where the multi-modality issue is pronounced. Our proposed dual-advantage weighting scheme significantly mitigates this issue. The green circle indicates the optimal path, while the red circle marks a suboptimal route.
A prime example is showcased in Figure 1, depicting the performance of three pre-trained policies in maze-based navigation tasks [10]. A quadruped robot has been trained to navigate these mazes. It is tasked with reaching new, unseen goals (red circles) from a starting point (orange circles). These policies were trained via supervised learning: a baseline with no action weighting (left), goal-conditioned advantage weighting (middle), and our proposed dual-advantage weighting (right). While the goal-conditioned advantage weighting often outperforms the baseline, it can occasionally guide the robot into suboptimal areas, causing delays before redirecting towards the goal. A closer look, as shown in Figure 2, indicates that dual-advantage weighting better distinguishes goal-aligned actions from sub-optimal ones by assigning them different weights. Consequently, our dual-advantage weighting approach mitigates the multi-modality challenge, resulting in policies that offer more direct and efficient routes to the goal.

Fig. 2: Comparison of normalized weights from various weighting schemes. Referring to Figure 1, the red circles demarcate optimal and sub-optimal areas given the target. The histograms in this figure illustrate that the dual-advantage scheme more effectively differentiates states in the optimal area from those in the sub-optimal area, allocating higher weights to the 'optimal' area states.
In this work, we address the challenges of multi-modality in goal-conditioned offline RL, introducing a novel approach to tackle them. The main contributions of our paper are:

Related work
In this section, we offer a brief overview of methodologically related approaches. In goal-conditioned RL (GCRL), one of the main challenges is the sparsity of the reward signal. An effective solution is hindsight experience replay (HER) [3], which relabels failed rollouts that have not been able to reach the original goals and treats them as successful examples for different goals, thus effectively learning from failures. HER has been extended to solve different challenging tasks in synergy with other learning techniques, such as curriculum learning [11], model-based goal generation [12][13][14][15], and generative adversarial learning [16,17]. In the offline setting, GCRL aims to learn goal-conditioned policies using only a fixed dataset. The simplest solution has been to adapt standard offline reinforcement learning algorithms [18,19] by simply concatenating the state and the goal as a new state. Chebotar et al. [6] propose goal-conditioned conservative Q-learning and goal chaining to prevent value over-estimation and increase the diversity of the goals. Some previous works design offline GCRL algorithms from the perspective of state-occupancy matching [4]. Mezghani et al. [5] propose a self-supervised reward shaping method to facilitate offline GCRL.
Our work is most related to goal-conditioned imitation learning (GCIL). Emmons et al. [8] study the importance of concatenating goals with states, showing its effectiveness in various environments. Ding et al. [20] extend generative adversarial imitation learning [21] to goal-conditioned settings. Ghosh et al. [7] extend behavior cloning [22] to goal-conditioned settings and propose goal-conditioned supervised learning (GCSL) to imitate relabeled offline trajectories. Yang et al. [9] connect GCSL to offline GCRL and show that the objective function in GCSL is a lower bound of a function of the original GCRL objective. They propose the GEAW algorithm, which re-weights the offline data based on the advantage function, similarly to [23,24]. Additionally, Yang et al. [9] identify the multi-modality challenge in GEAW and introduce the best-advantage weight (BAW) to exclude state-actions with low advantage during the learning process. In parallel, our DAWOG was developed to address this very challenge, offering a novel advantage-based action re-weighting approach.
Some connections can also be found with goal-based hierarchical reinforcement learning methods [14,[25][26][27][28]. These works feature a high-level model capable of predicting a sequence of intermediate sub-goals and learn low-level policies to achieve them. Instead of learning to reach a specific sub-goal, our policy learns to reach an entire sub-region of the state space containing states that are equally valuable and provide an incremental improvement towards the final goal.
Lastly, there have been other applications of state space partitioning in reinforcement learning, such as facilitating exploration and accelerating policy learning in online settings [29][30][31][32]. Ghosh et al. [33] demonstrate that learning a policy confined to a state partition instead of the whole space can lead to low-variance gradient estimates for learning value functions. In their work, states are partitioned using K-means to learn an ensemble of locally optimal policies, which are then progressively merged into a single, better-performing policy. Instead of partitioning states based on their geometric proximity, we partition states according to the proximity of their corresponding goal-conditioned values. We then use this information to define an auxiliary reward function and, consequently, a region-based advantage function.

Preliminaries
Goal-conditioned MDPs. Goal-conditioned tasks are usually modeled as Goal-Conditioned Markov Decision Processes (GCMDPs), denoted by a tuple ⟨S, A, G, P, R⟩, where S, A, and G are the state, action, and goal space, respectively. For each state s ∈ S, there is a corresponding achieved goal, φ(s) ∈ G, where φ : S → G [1]. At a given state s_t, an action a_t taken towards a desired goal g results in a next state s_{t+1} according to the environment's transition dynamics, P(s_{t+1} | s_t, a_t). The environment then provides a reward, r_t = R(s_{t+1}, g), which is non-zero only when the goal has been reached, i.e.,

R(s_{t+1}, g) = 1 if ‖φ(s_{t+1}) − g‖ ≤ threshold, and 0 otherwise. (1)

Offline Goal-conditioned RL. In offline GCRL, the agent aims to learn a goal-conditioned policy, π : S × G → A, using an offline dataset containing previously logged trajectories that might be generated by any number of unknown behavior policies. The objective is to maximize the expected discounted cumulative return,

J_GCRL(π) = E_{g∼P_g, s_0∼P_0, a_t∼π(·|s_t, g)} [ Σ_{t=0}^{T} γ^t R(s_{t+1}, g) ], (2)

where γ ∈ (0, 1] is a discount factor, P_g is the distribution of the goals, P_0 is the distribution of the initial state, and T corresponds to the time step at which an episode ends, i.e., either the goal has been achieved or a timeout has been reached.

Goal-conditioned Value Functions. A goal-conditioned state-action value function [34] quantifies the value of an action a taken from a state s conditioned on a goal g using the sparse rewards of Eq. 1:

Q^π(s_t, a_t, g) = E_π [ Σ_{i=t}^{T} γ^{i−t} r_i ], (3)

where E_π[·] denotes the expectation taken with respect to a_t ∼ π(· | s_t, g) and s_{t+1} ∼ P(· | s_t, a_t). Analogously, the goal-conditioned state value function quantifies the value of a state s when trying to reach g:

V^π(s_t, g) = E_{a∼π(·|s_t, g)} [ Q^π(s_t, a, g) ]. (4)

The goal-conditioned advantage function,

A^π(s, a, g) = Q^π(s, a, g) − V^π(s, g), (5)

then quantifies how advantageous it is to take a specific action a in state s towards g over taking the actions sampled from π(· | s, g) [9].
Goal-conditioned Supervised Learning (GCSL). GCSL [7] relabels the desired goal in each data tuple (s_t, a_t, g) with the goal achieved later in the trajectory to increase the diversity and quality of the data [3,35]. The relabeled dataset is denoted as D_R = {(s_t, a_t, g = φ(s_i)) | T ≥ i > t ≥ 0}. GCSL learns a policy that mimics the relabeled transitions by maximizing

J_GCSL(θ) = E_{(s_t, a_t, g)∼D_R} [ log π_θ(a_t | s_t, g) ]. (6)

Yang et al. [9] have connected GCSL to GCRL and demonstrated that J_GCSL lower bounds (1/T) log J_GCRL.

Goal-conditioned Exponential Advantage Weighting (GEAW). GEAW, as discussed in [9,24], extends GCSL by incorporating a goal-conditioned exponential advantage as the weight for Eq. 6. Its design ensures that samples with higher advantages receive larger weights and vice versa. Specifically, GEAW trains a policy that emulates relabeled transitions, but with varied weights:

J_GEAW(θ) = E_{(s_t, a_t, g)∼D_R} [ exp_clip(A^{π_b}(s_t, a_t, g)) log π_θ(a_t | s_t, g) ]. (7)

Here, exp_clip(·) clips values within the range (0, M] to ensure numerical stability. This weighting approach has been demonstrated to be a closed-form solution to an offline RL problem, guaranteeing that the resultant policy aligns closely with the behavior policy [24].
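As a concrete illustration, the hindsight relabeling used to build D_R can be sketched in a few lines of Python. The tuple layout, the `phi` mapping, and the `num_samples` parameter are illustrative assumptions, not the authors' implementation:

```python
import random

def relabel_trajectory(trajectory, phi, num_samples=1):
    """Hindsight relabeling sketch: replace the desired goal of each
    (s_t, a_t) pair with a goal achieved later in the same trajectory,
    yielding tuples (s_t, a_t, phi(s_i)) with i > t.

    `trajectory` is a list of (state, action) pairs and `phi` maps a
    state to the goal it achieves; both are placeholders for whatever
    representations the dataset actually uses.
    """
    relabeled = []
    T = len(trajectory)
    for t, (s_t, a_t) in enumerate(trajectory[:-1]):
        for _ in range(num_samples):
            # Sample a future time step i with T - 1 >= i > t and use
            # its achieved goal phi(s_i) as the relabeled goal.
            i = random.randint(t + 1, T - 1)
            relabeled.append((s_t, a_t, phi(trajectory[i][0])))
    return relabeled
```

With `phi` as the identity, every relabeled goal is simply a state visited later in the same trajectory.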

Methods
In this section, we formally present the proposed methodology and analytical results. First, we introduce the notion of a target region advantage function in Section 4.1, which we use to develop the learning algorithm in Section 4.2. In Section 4.3, we provide a theoretical analysis offering guarantees that DAWOG learns a policy that is never worse than the underlying behavior policy.

Target region advantage function
For any state s ∈ S and goal g ∈ G, the goal-conditioned value function of Eq. 4 takes values in the unit interval due to the binary nature of the reward function in Eq. 1. Given a positive integer K, we partition [0, 1] into K equally sized intervals, {β_i}_{i=1,...,K}. For any goal g, this partition induces a corresponding partition of the state space.
Definition 1 (Goal-conditioned State Space Partition) For a fixed desired goal g ∈ G, the state space is partitioned into K equally sized regions according to V π (•, g).
The k-th region, denoted B_k(g), contains all states whose goal-conditioned values are within β_k, i.e.,

B_k(g) = {s ∈ S | V^π(s, g) ∈ β_k}. (8)

Fig. 3: Illustration of the two advantage functions used by DAWOG for a simple navigation task. First, a goal-conditioned advantage is learned using only relabeled offline data. Then, a target-region advantage is obtained by partitioning the states according to their goal-conditioned value function, identifying a target region, and rewarding actions leading to this region in the smallest possible number of steps. DAWOG updates the policy to imitate the offline data through an exponential weighting factor that depends on both advantages.
Our ultimate objective is to up-weight actions taken in a state s t ∈ B k (g) that are likely to lead to a region only marginally better (but never worse) than B k (g) as rapidly as possible.
Definition 2 (Target Region) For s ∈ B_k(g), the mapping b : S × G → {1, . . ., K} returns the index k of the region containing s. The goal-conditioned target region is defined as

G(s, g) = B_{min{b(s,g)+1, K}}(g), (9)

which is the set of states whose goal-conditioned value is not less than that of the states in the current region. For s ∈ B_k(g), G(s, g) is the current region B_k(g) if and only if k = K.
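The region index b(s, g) and the target region index of Definition 2 can be computed directly from a value estimate in [0, 1]. The sketch below assumes scalar value inputs and the paper's 1-based region indices:

```python
def region_index(value, K):
    """Map a goal-conditioned value V(s, g) in [0, 1] to the index
    k in {1, ..., K} of its equally sized interval beta_k."""
    # Clamp so that V == 1.0 falls in region K rather than K + 1.
    return min(int(value * K) + 1, K)

def target_region_index(value, K):
    """Index of the target region G(s, g): the next-higher region,
    capped at K as in Definition 2."""
    return min(region_index(value, K) + 1, K)
```

For example, with K = 10, a state valued 0.14 sits in region 2 and its target region is region 3.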
We now introduce two target region value functions.
Definition 3 (Target Region Value Functions) For a state s, action a ∈ A, and the target region G(s, g), we define a target region V-function and a target region Q-function based on an auxiliary reward function that returns a non-zero reward only when the next state belongs to the target region, i.e.,

r̃_t = R̃(s_{t+1}, G(s, g)) = 1 if s_{t+1} ∈ G(s, g), and 0 otherwise. (10)

The target region Q-function is

Q̃^π(s_t, a_t, G(s, g)) = Ẽ_π [ Σ_{i=t}^{T̃} γ^{i−t} r̃_i ], (11)

where T̃ corresponds to the time step at which the target region is achieved or a timeout is reached, and Ẽ_π[·] denotes the expectation taken with respect to the policy a_t ∼ π(· | s_t, g) and the transition dynamics s_{t+1} ∼ P(· | s_t, a_t). The target region Q-function estimates the expected cumulative return when starting in s_t, taking an action a_t, and then following the policy π, based on the auxiliary reward. The discount factor γ reduces the contribution of delayed target achievements. Analogously, the target region value function is defined as

Ṽ^π(s_t, G(s, g)) = E_{a∼π(·|s_t, g)} [ Q̃^π(s_t, a, G(s, g)) ] (12)

and quantifies the quality of a state s according to the same criterion.
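Under the same assumptions as above (scalar value estimates, 1-based region indices), the auxiliary reward of Definition 3 can be sketched as follows; counting a next state that lands at or beyond the target region as a success is our reading of the definition, not a detail stated in the paper:

```python
def target_region_reward(v_next, v_current, K):
    """Auxiliary reward sketch for the target-region value functions:
    1 when the next state lands in (or beyond) the target region
    G(s, g), 0 otherwise. `v_current` and `v_next` are goal-conditioned
    values V(., g) in [0, 1]; regions follow the K-interval partition
    of Definition 1.
    """
    k = min(int(v_current * K) + 1, K)       # current region b(s, g)
    k_next = min(int(v_next * K) + 1, K)     # region of the next state
    target = min(k + 1, K)                   # target region index
    return 1.0 if k_next >= target else 0.0
```

When the current region is already the top region (k = K), the target region coincides with it, so staying there yields reward 1.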
Using the above value functions, we are in a position to introduce the corresponding target region advantage function.

Definition 4 (Target Region Advantage Function) The target region advantage function is defined as

Ã^π(s, a, G(s, g)) = Q̃^π(s, a, G(s, g)) − Ṽ^π(s, G(s, g)). (13)
It estimates the advantage, in terms of the auxiliary cumulative return, of taking action a in state s and following the policy π thereafter, compared to taking actions sampled from the policy.

The DAWOG algorithm
The proposed DAWOG belongs to the family of weighted GCSL (WGCSL) algorithms, i.e., it is designed to optimize the objective

J_DAWOG(θ) = E_{(s_t, a_t, g)∼D_R} [ w_t log π_θ(a_t | s_t, g) ], (14)

where the role of w_t is to re-weight each action's contribution to the loss. In DAWOG, w_t is an exponential weight of the form

w_t = exp_clip(β A^{π_b}(s_t, a_t, g) + β̃ Ã^{π_b}(s_t, a_t, G(s_t, g))), (15)

where π_b is the underlying behavior policy that generated the relabeled dataset D_R. The contributions of the two advantage functions, A^{π_b}(s_t, a_t, g) and Ã^{π_b}(s_t, a_t, G(s_t, g)), are controlled by positive scalars, β and β̃, respectively. However, empirically, we have found that using a single shared parameter generally performs well across the tasks we have considered (see Section 5.5). The clipped exponential, exp_clip(·), is used for numerical stability and keeps the values within the (0, M] range, for a given threshold M > 0.

The algorithm combines the originally proposed goal-conditioned advantage [9] with the novel target region advantage. The former ensures that actions likely to lead to the goal are up-weighted. However, when the goal is still far, there may be several possible ways to reach it, resulting in a wide variety of favorable actions. The target region advantage function provides additional guidance by further increasing the contribution of actions expected to lead to a higher-valued sub-region of the state space as rapidly as possible. Both A^{π_b}(s_t, a_t, g) and Ã^{π_b}(s_t, a_t, G(s_t, g)) are beneficial in a complementary fashion: whereas the former is concerned with long-term gains, which are more difficult and uncertain, the latter is concerned with short-term gains, which are easier to achieve. As such, these two factors are complementary and their combined effect plays an important role in the algorithm's final performance (see Section 5.5). An illustration of the dual-advantage weighting scheme is shown in Fig. 3.
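A minimal sketch of the dual-advantage weight of Eq. 15, assuming the scalar advantage estimates are supplied by the learned critics; the default hyper-parameter values follow the paper's experiments:

```python
import math

def dual_advantage_weight(adv_goal, adv_region, beta=10.0,
                          beta_tilde=10.0, M=10.0):
    """Sketch of the weight w_t in Eq. 15: a clipped exponential of the
    weighted sum of the goal-conditioned advantage and the target-region
    advantage. `adv_goal` and `adv_region` stand for the critic
    estimates of A(s_t, a_t, g) and the target-region advantage.
    """
    w = math.exp(beta * adv_goal + beta_tilde * adv_region)
    return min(w, M)  # exp_clip keeps the weight within (0, M]
```

With both advantages at zero the weight is 1, so the loss reduces to plain GCSL on that sample; strongly positive advantages saturate at the clipping bound M.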
In the remainder, we explain the entire training procedure. The advantage A^{π_b}(s_t, a_t, g) is estimated through a one-step temporal-difference estimate,

A^{π_b}(s_t, a_t, g) ≈ r_t + γ V_{ψ_1}(s_{t+1}, g) − V_{ψ_1}(s_t, g). (16)

In practice, the goal-conditioned V-function is approximated by a deep neural network with parameters ψ_1, which is learned by minimizing the temporal difference (TD) error [36]:

L(ψ_1) = E_{(s_t, a_t, g)∼D_R} [ (V_{ψ_1}(s_t, g) − y_t)² ], (17)

where y_t is the target value given by

y_t = r_t + γ (1 − d(s_{t+1}, g)) V_{ψ_1^-}(s_{t+1}, g). (18)

Here d(s_{t+1}, g) indicates whether the state s_{t+1} has reached the goal g. The parameter vector ψ_1^- is a slowly moving average of ψ_1 to stabilize training [37]. Analogously, the target region advantage function is estimated by

Ã^{π_b}(s_t, a_t, G(s_t, g)) ≈ r̃_t + γ Ṽ_{ψ_2}(s_{t+1}, G(s_t, g)) − Ṽ_{ψ_2}(s_t, G(s_t, g)), (19)

where the target region V-function is approximated with a deep neural network parameterized by ψ_2. The relevant loss function is

L(ψ_2) = E_{(s_t, a_t, g)∼D_R} [ (Ṽ_{ψ_2}(s_t, G(s_t, g)) − ỹ_t)² ], (20)

where the target value is

ỹ_t = r̃_t + γ (1 − d(s_{t+1}, G(s_t, g))) Ṽ_{ψ_2^-}(s_{t+1}, G(s_t, g)), (21)

and d(s_{t+1}, G(s_t, g)) indicates whether the state s_{t+1} has reached the target region G(s_t, g). ψ_2^- is a slowly moving average of ψ_2. The full procedure is presented in Algorithm 1, where the two value functions are jointly optimized and contribute to optimizing Eq. 14.
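The two bootstrapped targets share the same structure, so a single helper suffices in a sketch; `done` stands in for d(s_{t+1}, g) or d(s_{t+1}, G(s_t, g)), and `v_next` for the slowly updated target network's value:

```python
def td_target(reward, done, v_next, gamma=0.99):
    """Sketch of the bootstrapped TD target used to fit the
    goal-conditioned V-function and, with the auxiliary reward, the
    target-region V-function. `done` (0.0 or 1.0) indicates the goal
    or target region has been reached, which truncates bootstrapping.
    """
    return reward + gamma * (1.0 - done) * v_next
```

When the goal is reached (reward 1, done 1), the target collapses to the reward itself; otherwise it is the discounted next-state value.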

Policy improvement guarantees
In this section, we demonstrate that our learned policy is never worse than the underlying behavior policy π b that generates the relabeled data.First, we express the policy learned by our algorithm in an equivalent form, as follows.
Proposition 1 DAWOG learns a policy π_θ that minimizes the KL-divergence from

π_dual(a | s, g) = (1 / N(s, g)) π_b(a | s, g) exp_clip(w), (22)

where w = β A^{π_b}(s, a, g) + β̃ Ã^{π_b}(s, a, G(s, g)), G(s, g) is the target region, and N(s, g) is a normalizing factor ensuring that Σ_{a∈A} π_dual(a | s, g) = 1.
Proof According to Eq. 14, DAWOG maximizes the following objective with the policy parameterized by θ:

J(θ) = E_{(s, a, g)∼D_R} [ exp_clip(w) log π_θ(a | s, g) ].

J(θ) reaches its maximum when π_θ(· | s, g) = π_dual(· | s, g) for all s and g. □

Then, we propose Proposition 2 to show the condition for policy improvement.
Proposition 2 [9,24] Suppose two policies π_1 and π_2 satisfy

h_1(π_2(a | s, g)) = h_2(s, g, A^{π_1}(s, a, g)),

where h_1(·) is a monotonically increasing function, and h_2(s, g, ·) is monotonically increasing for any fixed s and g. Then we have

V^{π_2}(s, g) ≥ V^{π_1}(s, g), ∀ s ∈ S, g ∈ G.

That is, π_2 is uniformly as good as or better than π_1.
We want to leverage this result to demonstrate that V^{π_dual}(s, g) ≥ V^{π_b}(s, g) for any state s and goal g. Firstly, we need a monotonically increasing function h_1(·). This is achieved by taking the logarithm of both sides of Eq. 22, so that h_1(·) = log(·). The following proposition establishes that we also have a function h_2(s, g, A^{π_b}(s, a, g)) = β A^{π_b}(s, a, g) + β̃ Ã^{π_b}(s, a, G(s, g)) + N(s, g), which is monotonically increasing for any fixed s and g. Since β, β̃ ≥ 0 and N(s, g) is independent of the action, it is equivalent to prove that, for any fixed s and g, there exists a monotonically increasing function l satisfying l(s, g, A^{π_b}(s, a, g)) = Ã^{π_b}(s, a, G(s, g)).

Proposition 3 Given fixed s, g, and the target region G(s, g), the goal-conditioned advantage function A^π and the target region advantage function Ã^π satisfy l(s, g, A^π(s, a, g)) = Ã^π(s, a, G(s, g)), where l(s, g, ·) is monotonically increasing for any fixed s and g.
Proof By the definition of a monotonically increasing function, it suffices to show that for all a′, a″ ∈ A, A^π(s, a′, g) ≥ A^π(s, a″, g) implies Ã^π(s, a′, G(s, g)) ≥ Ã^π(s, a″, G(s, g)). We start with any two actions a′, a″ ∈ A such that

A^π(s, a′, g) ≥ A^π(s, a″, g). (29)

By adding V^π(s, g) to both sides, the inequality becomes

Q^π(s, a′, g) ≥ Q^π(s, a″, g). (30)

The goal-conditioned Q-function can be written as

Q^π(s, a, g) = E_{τ_i} [ R_{t,τ_i} ], (31)

where τ_i represents a trajectory s_t = s, a_t = a, r^i_t, s^i_{t+1}, a^i_{t+1}, r^i_{t+1}, . . ., s^i_T, and R_{t,τ_i} = r^i_t + γ r^i_{t+1} + . . . Let s^i_tar denote the state at which τ_i enters the target region and t^i_tar the corresponding time step. Because the reward is zero until the desired goal is reached, Eq. 31 can be written as

Q^π(s, a, g) = E_{τ_i} [ γ^{t^i_tar − t} V^π(s^i_tar, g) ]. (32)

Similarly, the target region Q-function satisfies

Q̃^π(s, a, G(s, g)) = E_{τ_i} [ γ^{t^i_tar − t} ]. (33)

According to Eq. 30 and Eq. 32, we have

E_{τ′_i} [ γ^{t′^i_tar − t} V^π(s′^i_tar, g) ] ≥ E_{τ″_i} [ γ^{t″^i_tar − t} V^π(s″^i_tar, g) ]. (34)

Given the value-based partitioning of the state space, we assume that the goal-conditioned values of states in the target region are sufficiently close, i.e., V^π(s^i_tar, g) ≈ v for all i. Then, Eq. 34 can be approximated as

v · E_{τ′_i} [ γ^{t′^i_tar − t} ] ≥ v · E_{τ″_i} [ γ^{t″^i_tar − t} ]. (35)

Removing v on both sides of Eq. 35 and applying Eq. 33, we have Q̃^π(s, a′, G(s, g)) ≥ Q̃^π(s, a″, G(s, g)). Subtracting Ṽ^π(s, G(s, g)) from both sides then yields Ã^π(s, a′, G(s, g)) ≥ Ã^π(s, a″, G(s, g)), which completes the proof.

□
Since GCSL aims to mimic the underlying behavior policy using maximum likelihood estimation, DAWOG inherently offers guarantees in relation to GCSL.

Experimental results
In this section, we examine DAWOG's performance relative to existing state-of-the-art algorithms using environments of increasing complexity. The remainder of this section is organized as follows. The benchmark tasks and datasets are presented in Section 5.1. The implementation details are provided in Section 5.2. A list of competing methods is presented in Section 5.3, and the comparative performance results are found in Section 5.4. Here, we also qualitatively inspect the policies learned by DAWOG in an attempt to characterize the improvements that can be achieved over other methods. Section 5.5 presents extensive ablation studies to appreciate the relative contribution of the different advantage weighting factors. Finally, in Section 5.6, we study how the dual-advantage weight depends on its hyperparameters.

Grid World
We designed two 16 × 16 grid worlds to assess performance on a simple navigation task. From its starting position on the grid, an agent needs to reach a goal that has been randomly placed in one of the available cells. Only four actions are available, to move left, right, up, and down. The agent accrues a positive reward when it reaches the cell containing the goal. To generate the benchmark dataset, we trained a Deep Q-learning agent [37] and used its replay buffer, containing 4,000 trajectories of 50 time steps each, as the dataset.

AntMaze navigation
The AntMaze suite used in our experiments is obtained from the D4RL benchmark [10], which has been widely adopted by offline GCRL studies [4,8,25]. The task requires controlling an 8-DoF quadruped robot that moves in a maze and aims to reach a target location within an allowed maximum of 1,000 steps. The suite contains three different maze layouts: umaze (a U-shaped wall in the middle), medium, and large, and provides three training datasets. The datasets differ in the way the starting and goal positions of each trajectory were generated: in umaze, the starting position is fixed and the goal position is sampled within a small fixed-position region; in diverse, the starting and goal positions are randomly sampled in the whole environment; finally, in play, the starting and goal positions are randomly sampled within hand-picked regions. In this sparse-reward environment, the agent obtains a reward only when it reaches the target goal. We use a normalized score as originally proposed in [10], i.e.,

s_n = 100 · (s − s_r) / (s_e − s_r),

where s is the unnormalized score, s_r is the score obtained using a random policy, and s_e is the score obtained using an expert policy.
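The normalized score is a one-line computation; in practice, s_r and s_e would come from the reference scores released with the D4RL benchmark:

```python
def normalized_score(s, s_random, s_expert):
    """D4RL-style normalized score: 100 * (s - s_r) / (s_e - s_r),
    so a random policy scores 0 and an expert policy scores 100."""
    return 100.0 * (s - s_random) / (s_expert - s_random)
```

For instance, a raw return halfway between the random and expert references maps to a normalized score of 50.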
In our evaluation phase, the policy is tested online. The agent's starting position is always fixed, and the goal position is generated using one of the following methods:
• fixed goal: the goal position is sampled within a small and fixed region in a corner of the maze, as in previous work [4,8,25];
• diverse goal: the goal position is uniformly sampled over the entire region.
This evaluation scheme has not been adopted in previous works, but helps assess the policy's generalization ability in goal-conditioned settings.

Gym robotics
Gym Robotics [2] is a popular robotic suite used in both online and offline GCRL studies [4,9]. The agent to be controlled is a 7-DoF robotic arm, and several tasks are available: in FetchReach, the arm needs to touch a desired location; in FetchPush, the arm needs to move a cube to a desired location; in FetchPickAndPlace, a cube needs to be picked up and moved to a desired location; finally, in FetchSlide, the arm needs to slide a cube to a desired location. Each environment returns a reward of one when the task has been completed within an allowed time horizon of 50 time steps. For this suite, we use the expert offline dataset provided by [9]. The dataset for FetchReach contains 1 × 10^5 time steps, whereas all the other datasets contain 2 × 10^6 steps. The datasets were collected using a policy pre-trained with DDPG and hindsight relabeling [3,38]; the actions from the policy were perturbed by adding Gaussian noise with zero mean and 0.2 standard deviation.

Implementation details
DAWOG's training procedure is shown in Algorithm 1. In our implementation, for continuous control tasks, we use a Gaussian policy following previous recommendations [39]. When interacting with the environment, the actions are sampled from this distribution. All the neural networks used in DAWOG are 3-layer multi-layer perceptrons with 512 units in each layer and ReLU activation functions. The parameters are trained using the Adam optimizer [40] with a learning rate of 1 × 10^−3. The training batch size is 512 across all networks. To represent G(s, g), we use a K-dimensional one-hot encoding vector, whose only non-zero entry corresponds to the target region's index, concatenated with the goal g. Four hyper-parameters need to be chosen: the state partition size, K; the two coefficients controlling the relative contribution of the two advantage functions, β and β̃; and the clipping bound, M. In our experiments, we use K = 20 for umaze and medium maze, K = 50 for large maze, and K = 10 for all other tasks. In all our experiments, we use fixed values of β = β̃ = 10. The clipping bound is always kept at M = 10.
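The target-region input encoding can be sketched as follows, assuming the 1-based region indexing used in Definition 2; the resulting vector would be concatenated with the goal before being fed to the networks:

```python
def encode_target_region(target_index, K):
    """One-hot encoding sketch of the target region G(s, g): a
    K-dimensional vector with 1 at the target region's (1-based)
    index and 0 elsewhere. This vector is concatenated with the goal
    g as input to the policy and value networks.
    """
    one_hot = [0.0] * K
    one_hot[target_index - 1] = 1.0
    return one_hot
```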

Competing methods
Several competing algorithms have been selected for comparison with DAWOG, including offline DRL methods that were not originally proposed for goal-conditioned tasks and required some minor adaptation.In the remainder of this Section, the nomenclature 'g-' indicates that the original algorithm has been implemented to operate in a goal-conditioned setting by concatenating the state and the goal as a new state and with hindsight relabeling.
In all experiments, we independently optimize the hyper-parameters for every algorithm.
The first category of algorithms comprises regression-based methods that imitate the relabeled offline dataset using various weighting strategies:
• GCSL [7] imitates the relabeled transitions without any weighting strategy.
• GEAW [9,24] uses goal-conditioned advantage to weight the actions in the offline data.
Finally, we include a hierarchical learning method, IRIS [43], which employs a low-level imitation learning policy to reach sub-goals commanded by a high-level goal planner.

Performance comparisons and analysis
To appreciate how state space partitioning works, we provide examples of value-based partitions for the grid world environments in Figure 4. In these cases, the environmental states simply correspond to locations in the grid.
Here, the state space is divided into regions, with darker colors indicating higher values. As expected, these figures clearly show that states can be ordered based on the estimated value function, and that higher-valued states are those close to the goal. We also report the average return across five runs in Table 3, where we compare DAWOG against GCSL and GEAW, two algorithms that are easily adapted for discrete action spaces. Table 1 presents the results for the Gym Robotics suite. We detail the average return and the standard deviation for each algorithm, derived from four independent runs with unique seeds. As can be seen from the results, most of the competing algorithms reach a performance comparable with DAWOG's. However, DAWOG generally achieves higher scores and the most stable performance across tasks.
Table 2 displays similar findings for the AntMaze suite. In these more complex, long-horizon environments, DAWOG consistently surpasses all baseline algorithms. In scenarios with diverse goals, while all algorithms exhibit lower performance, DAWOG still manages to secure the highest average score. This setup requires better generalization, given that the test goals are sampled from every position within the maze.
To gain an appreciation for the benefits introduced by the target region approach, in Figure 1 we visualize 100 trajectories realized by three different policies for AntMaze tasks: dual-advantage weighting (DAWOG), equal weighting, and goal-conditioned advantage weighting. The trajectories generated by equal weighting occasionally lead to regions in the maze that should have been avoided, which results in sub-optimal solutions. The policy from goal-conditioned advantage weighting is occasionally less prone to making the same mistakes, although it still suffers from the multi-modality problem. This can be appreciated, for instance, by observing the antmaze-medium case.

Fig. 4: An illustration of goal-conditioned state space partitions for two simple Grid World navigation tasks. In each instance, the desired goal is represented by a red circle. In these environments, each state simply corresponds to a position on the grid and, in the top row, is color-coded according to its goal-conditioned value. In the lower row, states sharing similar values have been merged to form a partition. For any given state, the proposed target region advantage up-weights actions that move the agent directly towards a neighboring region with higher value.
In contrast, DAWOG is generally able to reach the goal with fewer detours, hence in a shorter amount of time.

Further studies
In this section, we take a closer look at how the two advantage-based weights featuring in Eq. 15 perform, both separately and jointly, when used in the loss of Eq. 14. We compare learning curves, region occupancy times (i.e., the time spent in each region of the state space while travelling towards the goal), and potential over-estimation biases. We also study the effects of using different target regions and of using entropy to regularize policy learning.

Learning curves
In the AntMaze environments, we train DAWOG using no advantage (β1 = β2 = 0), only the goal-conditioned advantage (β1 = 10, β2 = 0), only the target region advantage (β1 = 0, β2 = 10), and the proposed dual-advantage (β1 = β2 = 10). Over the course of 50,000 gradient updates, Figure 5 clearly illustrates the distinct learning trajectories of each variant. Both the goal-conditioned and target region advantages perform better than using no advantage, and their performance is generally comparable, with the latter often achieving higher normalized returns. Combining the two advantages leads to significantly higher returns than either advantage taken individually.
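The dual exponential weight of Eq. 15 can be sketched as follows. This is a minimal illustration only: the function name and the exponent clipping used for numerical stability are our assumptions, not necessarily the paper's exact implementation.

```python
import numpy as np

def dual_advantage_weight(goal_adv, region_adv, beta1=10.0, beta2=10.0, clip=10.0):
    """Exponential dual-advantage weight (a sketch of Eq. 15).

    goal_adv:   goal-conditioned advantage A(s, a, g)
    region_adv: target region advantage   A~(s, a, G)
    The exponent is clipped from above for numerical stability
    (an assumption for this sketch).
    """
    exponent = np.clip(beta1 * goal_adv + beta2 * region_adv, None, clip)
    return np.exp(exponent)
```

Setting beta2 = 0 recovers plain goal-conditioned advantage weighting (as in GEAW), while beta1 = beta2 = 0 reduces every weight to 1, i.e., the unweighted GCSL loss, matching the four ablation settings above.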

Region occupancy times
In this study, we set out to confirm that the dual-advantage weighting scheme yields a policy favouring actions that lead to the next higher-ranking target region rapidly, i.e., by reducing the occupancy time in each region. Using the AntMaze environments, Figure 6 shows the average time spent in each region of the state space partitioned with K = 50 regions. As shown there, dual-advantage weighting allows the agent to reach the target (next) region in fewer time steps than the goal-conditioned advantage alone. As the episode progresses, the ant's remaining time to complete the task diminishes, influencing its decision-making. Hence, as the ant progressively moves to higher-ranking regions closer to the goal, the occupancy times decrease.

Fig. 6: Average time spent in a region of the state space before moving on to the higher-ranking region (K = 50), using a goal-conditioned value function for state partitioning. The y-axis indicates the average number of time steps (in log scale) spent in a region. The dual-advantage weighting scheme allows the agent to reach each subsequent target region more rapidly than the goal-conditioned advantage alone, resulting in an overall shorter time to reach the final goal.
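The occupancy statistic behind Figure 6 can be sketched as follows, assuming each trajectory has already been mapped to a sequence of region ranks k(s_t) (a hypothetical preprocessing step, e.g., by binning V(s_t, g) into K = 50 bins).

```python
import numpy as np

def region_occupancy_times(region_indices):
    """Average number of consecutive steps spent in each visited region.

    region_indices: sequence of region ranks k(s_t) along one trajectory.
    Returns {region: mean occupancy time over its visits}.
    """
    visits = {}
    start = 0
    for t in range(1, len(region_indices) + 1):
        # Close the current run when the trajectory ends or the region changes.
        if t == len(region_indices) or region_indices[t] != region_indices[start]:
            visits.setdefault(region_indices[start], []).append(t - start)
            start = t
    return {k: float(np.mean(v)) for k, v in visits.items()}
```

For example, `region_occupancy_times([0, 0, 1, 1, 1, 2])` reports two steps in region 0, three in region 1, and one in region 2.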

Over-estimation bias
We assess the extent of potential over-estimation errors affecting the two advantage weighting factors used in our method (see Eq. 15). This is done by studying the error incurred in estimating the corresponding V-functions (see Eq. 16 and Eq. 19). Given a state s and goal g, we compute the goal-conditioned V-value estimation error as V_ψ1(s, g) − V^π(s, g), where V_ψ1(s, g) is the parameterized function learned by our algorithm and V^π(s, g) is an unbiased Monte-Carlo estimate of the goal-conditioned V-function's true value [36]. Since V^π(s, g) represents the expected discounted return obtained by the underlying behavior policy that generated the relabeled data, we use a policy pre-trained with the GCSL algorithm to generate 1,000 trajectories and calculate the Monte-Carlo estimate (i.e., the average discounted return). Analogously, the target region V-value estimation error is Ṽ_ψ2(s, g, G(s, g)) − Ṽ^π(s, g, G(s, g)). We use the learned target region V-value function to calculate Ṽ_ψ2(s, g, G(s, g)), and Monte-Carlo estimation to approximate Ṽ^π(s, g, G(s, g)).
Our investigation focuses on the Grid World environment, specifically analyzing two distinct layouts: grid-umaze and grid-wall. For each layout, we sample s and g uniformly within the entire maze and ensure that the number of regions separating them is uniformly distributed in {1, . . . , K}. Then, for each k in that range: 1) 1,000 goal positions are sampled randomly within the whole layout; 2) for each goal position, the state space is partitioned according to V_ψ(·, g); and 3) a state is sampled randomly within the corresponding region. Since some regions may contain no states, the observed total number of regions can be smaller than K = 10. The resulting estimation errors are shown in Fig. 7. Both the mean and standard deviation of the Ṽ-value errors are consistently smaller than those of the V-value errors. This indicates that the target region value function is more robust to over-estimation bias, which may help improve generalization in out-of-distribution settings.
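The Monte-Carlo reference value and the resulting estimation error can be sketched as below; the function names are ours, and each element of `reward_trajectories` is assumed to be the reward sequence of one rollout started from (s, g) under the behaviour policy.

```python
import numpy as np

def mc_value_estimate(reward_trajectories, gamma=0.99):
    """Unbiased Monte-Carlo estimate of V^pi(s, g): the average
    discounted return over rollouts started from (s, g)."""
    returns = []
    for rewards in reward_trajectories:
        discounts = gamma ** np.arange(len(rewards))
        returns.append(float(np.dot(discounts, rewards)))
    return float(np.mean(returns))

def estimation_error(v_learned, reward_trajectories, gamma=0.99):
    """Signed over-estimation error V_psi(s, g) - V^pi(s, g)."""
    return v_learned - mc_value_estimate(reward_trajectories, gamma)
```

With a sparse goal-reaching reward, a single rollout that succeeds after 3 steps, `[0, 0, 0, 1]`, yields the discounted return 0.99^3; a positive `estimation_error` then indicates over-estimation by the learned value function.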

Effects of different target regions
As outlined in Definition 2, the target region comprises states whose goal-conditioned values marginally exceed the current state's value. Within DAWOG, when the current region is B_k(g), the target region is B_{k+1}(g). Yet, regions beyond B_{k+1}(g) could also be considered for more immediate benefits. This section examines the implications of varying the target region. Figure 8 demonstrates that targeting the immediately neighboring, higher-valued region, as DAWOG does, consistently yields superior performance compared to configurations with more distant target regions. There is only one instance (antmaze-large-play) where targeting a slightly further region yields a marginally better outcome. Nonetheless, as a general trend, the performance advantage diminishes as the target region becomes increasingly distant from the current region.

Fig. 8: DAWOG with different target regions. The target region is set to be the next (DAWOG), 3, 5, and 10 regions after the current one. Each curve is generated from four distinct runs, each initiated with a different random seed.
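The target-region choice varied in this ablation amounts to a single index computation. In this sketch, capping the index at the highest region K − 1 near the goal is our assumption.

```python
def target_region(current_k, K, offset=1):
    """Index of the target region when the agent is in B_k.

    DAWOG uses offset = 1 (the immediately higher-ranking region);
    the ablation also tries offsets of 3, 5, and 10. The index is
    capped at K - 1 so that, near the goal, the target region is
    the goal region itself (an assumption of this sketch).
    """
    return min(current_k + offset, K - 1)
```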

Policy learning with entropy regularization
The concept of the target region advantage can be perceived as a regularization technique. In this analysis, we compare DAWOG with a version of GEAW enhanced by entropy regularization. The refined objective augments the advantage-weighted log-likelihood with an entropy bonus, E[exp(β A^{π_b}(s_t, a_t, g)) log π(a_t | s_t, g) + α H(π(· | s_t, g))], where H(π(· | s_t, g)) is defined as (1/2) ln(2πeσ²), with σ representing the standard deviation of the Gaussian distribution π(· | s_t, g). Initially, we set α to values from the set {0, 0.01, 0.1}. Subsequently, we employ a dynamic schedule for α, allowing it to decrease progressively from 0.1 to 0.01. The outcomes of these experiments are depicted in Figure 9.
Although strategically tuned regularization can slightly improve the GEAW baseline, DAWOG clearly maintains a consistent performance edge. This can be attributed to the distinctive manner in which DAWOG introduces short-term goals.
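The closed-form entropy term used above is straightforward to compute; this is the standard differential entropy of a univariate Gaussian, not code from the paper.

```python
import math

def gaussian_entropy(sigma):
    """Differential entropy of a univariate Gaussian policy:
    H = 0.5 * ln(2 * pi * e * sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)
```

The entropy grows with σ, so the α-weighted bonus pushes the Gaussian policy towards broader action distributions, which is exactly the regularizing effect being compared against DAWOG's target region mechanism.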

Sensitivity to hyperparameters
Lastly, we examine the impact on DAWOG's overall performance of the number of partitions, K, and of the coefficients β1 and β2, which control the relative contributions of the two advantage functions. For the AntMaze task, we report the distribution of normalized returns as K increases. Figure 10 reveals that the optimal parameter, yielding high average returns with low variance, often depends on the specific task and is likely influenced by the environment's complexity.
The performance of DAWOG, as depicted in Figure 11, varies with the settings of β1 and β2, but only mildly: the plot shows limited sensitivity to the various parameter combinations and a good degree of symmetry. In all our experiments, including those in Tables 1 and 2, we opted for a shared value, β1 = β2 = 10, rather than optimizing each parameter for each task. This choice suggests that strong performance can be achieved even without extensive hyperparameter optimization.

Discussion and conclusions
Our study introduces a novel dual-advantage weighting scheme for supervised learning, specifically designed to tackle the complexities of multi-modality and distribution shift in goal-conditioned offline reinforcement learning (GCRL).
The corresponding algorithm, DAWOG (Dual-Advantage Weighting for Offline Goal-conditioned learning), prioritizes actions that lead to higher-reward regions, introducing an additional source of inductive bias and enhancing the ability to generalize learned skills to novel goals. Theoretical support is provided by demonstrating that the derived policy is never inferior to the underlying behavior policy. Empirical evidence shows that DAWOG learns highly competitive policies and surpasses several existing offline algorithms on demanding goal-conditioned tasks. Significantly, the ease of implementing and training DAWOG underscores its practical value, contributing substantially to the evolving understanding of offline GCRL and its interplay with goal-conditioned supervised learning (GCSL).
The potential for future research in refining and expanding upon our proposed approach is multifaceted. Firstly, our current method partitions states into equally-sized value bins. An adaptive partitioning technique that does not assume equal bin sizes could provide finer control over the shape of the state partition (e.g., merging smaller regions into larger ones), potentially leading to further performance improvements.
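The equal-bin partitioning the paper uses, and the adaptive alternative suggested above, can both be sketched in a few lines. The equal-width variant is a simplified reading of DAWOG's scheme; the quantile-based variant is a hypothetical extension, not something evaluated in the paper.

```python
import numpy as np

def partition_equal(values, K):
    """Equal-width binning of goal-conditioned values into K regions
    (a simplified sketch of DAWOG's partitioning)."""
    edges = np.linspace(values.min(), values.max(), K + 1)
    # Interior edges only; clip keeps boundary values inside [0, K-1].
    return np.clip(np.digitize(values, edges[1:-1]), 0, K - 1)

def partition_quantile(values, K):
    """Adaptive alternative: quantile-based bins give each region
    roughly the same number of states (hypothetical extension)."""
    edges = np.quantile(values, np.linspace(0, 1, K + 1))
    return np.clip(np.digitize(values, edges[1:-1]), 0, K - 1)
```

On a skewed value distribution, the quantile variant allocates more regions where states are dense, which is one concrete way bin sizes could adapt.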
Secondly, considering DAWOG's effectiveness in alleviating the multi-modality problem in offline GCRL, it may also benefit other GCRL approaches beyond advantage-weighted GCSL. Specifically, our method could extend to actor-critic-based offline GCRL, such as TD3-BC [19], which introduces a behavior-cloning regularizer into the TD3 algorithm [42] to keep the policy close to the actions observed in the historical data. The dual-advantage weighting scheme could offer an alternative direction for developing a TD3-based algorithm for offline GCRL.
Lastly, given our method's ability to accurately weight actions, it might also facilitate exploration in online GCRL, potentially in combination with self-imitation learning [44][45][46]. For example, a recent study demonstrated that advantage-weighted supervised learning is a competitive method for learning from good experiences in GCRL settings [46]. These promising directions warrant further exploration.
The goal-conditioned state value of s_2 is

V^{π_b}(s_2, g) = (0.99^3 + 0) / 2 ≈ 0.485. (39)

The goal-conditioned state-action values for s_1, g and the actions a_1, a_2 are

Q^{π_b}(s_1, a_1, g) = 0.99^10 ≈ 0.904,
Q^{π_b}(s_1, a_2, g) = 0.99^2 · 0.485 ≈ 0.475. (40)

The advantages then follow from A^{π_b}(s, a, g) = Q^{π_b}(s, a, g) − V^{π_b}(s, g). According to the state values, the states can be roughly divided into three regions, colored blue, yellow, and gray. From the example, it is evident that a_2 is the optimal action towards the goal. However, because the trajectory fails after s_2, its goal-conditioned advantage value A^{π_b}(s_1, a_2, g) is smaller than A^{π_b}(s_1, a_1, g). In contrast, our region-advantage value focuses on the action's advantage towards the subsequent region. This provides a nuanced, short-term assessment of an action's quality, enabling a more accurate identification of optimal actions than relying solely on the goal-conditioned advantage value.
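The arithmetic in Eqs. 39 and 40 can be checked directly. This snippet only reproduces the quoted numbers; the interpretation of s_2 as the average of one successful 3-step trajectory and one failed trajectory follows the surrounding text.

```python
gamma = 0.99

# Value of s_2 (Eq. 39): the average discounted return of the two
# trajectories passing through it, one reaching the goal in 3 steps
# and one failing (return 0).
v_s2 = (gamma ** 3 + 0) / 2          # ~0.485

# Q-values at s_1 (Eq. 40): a_1 reaches the goal in 10 steps; a_2
# reaches s_2 in 2 steps and then follows the behaviour policy.
q_s1_a1 = gamma ** 10                # ~0.904
q_s1_a2 = gamma ** 2 * v_s2          # ~0.475
```

Since q_s1_a1 > q_s1_a2, the goal-conditioned advantage indeed favours the sub-optimal action a_1, which is the pathology the region-advantage is designed to correct.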
Figure 12(b) underscores the advantage of integrating both goal-conditioned and region-conditioned advantages. In this illustration, state s_3 presents two actions with identical region-advantage values, Ã^{π_b}(s_3, a_1, G) = Ã^{π_b}(s_3, a_3, G), while the goal-conditioned advantage values satisfy A^{π_b}(s_3, a_1, g) > A^{π_b}(s_3, a_3, g). This suggests that blending the goal-conditioned and region-conditioned advantages for re-weighting behavioral actions can offer better outcomes than employing the region-conditioned advantage alone.
It is important to note that the numerical examples in this section serve primarily as illustrative tools.A more comprehensive and rigorous exploration, encompassing both theoretical and empirical analyses, is detailed in the main body of the paper.

B. Assessing GEAW with two value functions
In this experiment, we aim to ascertain whether DAWOG's performance is primarily due to employing two value networks with distinct initializations. To test this, we substituted DAWOG's target region advantage with a second goal-conditioned advantage. Specifically, we initialized two goal-conditioned state value functions, {V_θi}_{i=1}^{2}, using distinct random seeds, and updated them following the GEAW protocol. The policy was optimized using the weighted log-likelihood exp(β A_i^{π_b}(s_t, a_t, g)) log π(a_t | s_t, g), where each advantage is calculated as A_i^{π_b}(s_t, a_t, g) = r_t + γ V_θi(s_{t+1}, g) − V_θi(s_t, g). We benchmarked this approach against both GEAW and DAWOG across four environments, as depicted in Figure 13. Our findings do not indicate that DAWOG's superior performance is due solely to the dual value networks with different initializations.
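The per-sample quantities in this ablation can be sketched as follows. The function names and the exponent clipping are our assumptions for this illustration.

```python
import numpy as np

def geaw_advantage(r_t, v_next, v_curr, gamma=0.99):
    """One-step goal-conditioned advantage used in the GEAW-x2 ablation:
    A_i(s_t, a_t, g) = r_t + gamma * V_theta_i(s_{t+1}, g) - V_theta_i(s_t, g)."""
    return r_t + gamma * v_next - v_curr

def weighted_log_likelihood(log_prob, advantage, beta=10.0, clip=10.0):
    """Per-sample advantage-weighted log-likelihood,
    exp(beta * A) * log pi(a_t | s_t, g), with the exponent clipped
    for numerical stability (an assumption of this sketch)."""
    return float(np.exp(np.clip(beta * advantage, None, clip)) * log_prob)
```

In the ablation, one such term is computed per value network V_θi, whereas DAWOG replaces the second term's advantage with the target region advantage.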

C. Assessing the effects of DAWOG's target region
In this subsection, we provide further evidence of the dual-advantage's efficacy in promoting the agent's short-term success. Figure 14 depicts the rate at which the agent reaches the subsequent target region within ten time steps. Comparing GEAW with DAWOG, it becomes evident that, by harnessing the target region advantage, the agent reaches the target region within fewer steps more frequently, thereby progressing closer to the intended goal.
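The success rate of Figure 14 can be computed as below; the encoding of the data (each entry is the number of steps an attempt took to enter the next region, or None if it never did) is a hypothetical choice for this sketch.

```python
def short_horizon_success_rate(steps_to_next_region, horizon=10):
    """Fraction of attempts that enter the next target region within
    `horizon` steps. Entries are step counts, or None for attempts
    that never reached the next region."""
    hits = sum(1 for t in steps_to_next_region if t is not None and t <= horizon)
    return hits / len(steps_to_next_region)
```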

Fig. 5 :
Fig. 5: Training curves for different tasks using different algorithms, each implementing a different weighting scheme: dual-advantage, no advantage, only the goal-conditioned advantage, and only the target region advantage. The solid line and the shaded area respectively represent the mean and the standard deviation computed from 4 independent runs.

Fig. 7 :
Fig. 7: Estimation error of goal-conditioned and target region value functions in Grid World tasks.

Fig. 10 :
Fig. 10: DAWOG's performance, evaluated on the AntMaze datasets, as a function of K, the number of state space partitions used to define the target regions. The box plots show the normalized return achieved by DAWOG in four settings as the target region size decreases (K increases). We used 4 runs with different seeds. The best performance (highest average returns and lowest variability) was consistently achieved across all settings with around K = 50 equally sized target regions.

Fig. 11 :
Fig. 11: DAWOG's performance, evaluated on six datasets, as a function of its two hyperparameters, β1 and β2, which control the goal-conditioned and target-based exponential weights featuring in Equation 15. The performance metric is the average return across 5 runs. To produce the results presented in Table 2, we used a (potentially sub-optimal) fixed parameter combination: β1 = β2 = 10.

Fig. 13 :
Fig. 13: The plots depict the performance of DAWOG, standard GEAW, and GEAW utilizing two value functions (denoted GEAW x2) across four environments. Each curve represents an average over 4 distinct random seeds.

Fig. 14 :
Fig. 14: Success rate comparison of two weighting strategies in reaching the subsequent target region.Here, the success rate indicates the likelihood of the agent successfully reaching the subsequent target region within a span of ten time steps.

Table 1 :
Experiment results in Gym Robotics.

Table 2 :
Experiment results in AntMaze environments. The results are normalized by the expert score from the D4RL paper. The mean and the standard deviation are calculated from 4 independent runs.

Table 3 :
Experiment results for the two Grid World navigation environments.