On the identifiability of parameters in reinforcement learning models

In this post we describe some recent work testing whether the parameters of commonly studied RL algorithms (e.g., Q-learning) are identifiable given observed patterns of choice behavior. This is a non-exhaustive exploration of the issue, but it might be interesting to people who use these models in their work.

Background

Reinforcement learning (RL) is a computational framework for how agents (artificial or otherwise) can learn to make decisions based on experience with the environment. It has become a dominant theory of learning in computational cognitive science and neuroscience and has been successful in many applied settings (e.g., robotics).

Figure 1. A complex and irregular surface. Here the height of the surface reflects the quality of a model fit as a function of two hypothetical model parameters. Clearly there are multiple solutions that "fit" equally well, making it hard to identify certain parameter combinations. For many models used in psychology, the shape of this surface is unknown and difficult to estimate.

When using a reinforcement learning model to fit human or animal data, the experimenter typically tries to estimate the latent parameters of the model from the participant’s observed choice behavior. For example, a researcher might fit an RL model to human data using Maximum Likelihood Estimation to obtain estimates of a participant’s learning rate. A key scientific question concerns how accurate fits of this type actually are. There are many ways in which such model fits can be led astray.

For example, in perhaps the worst-case scenario, participants are using a completely different learning strategy than the one posited by the model. In this case, parameter estimates from the (incorrect) model may be a poor characterization of the participant’s actual behavior. Less obviously, even if the model is “correct” (i.e., participants really are following a learning strategy identical to the one suggested by the model), it may not be possible to identify the model parameters from a sample of a participant’s behavior. For example, the likelihood space of the model may not be “well behaved” (i.e., there could be multiple local or global minima that confuse the parameter optimization routine, see Figure 1).

The Question

The topic of this post is how identifiable the parameters of common RL algorithms are under a particular set of conditions of interest to our lab. In particular, we wondered whether parameter estimates are stable and identifiable for RL algorithms applied to dynamic decision making tasks. These tasks involve environments where the reward given for choosing a certain option depends on the history of previous choices made by the agent. Because experienced outcomes are strongly dependent on the particular choices the agent made, it seems plausible that this class of experiments might have a flat or multi-modal likelihood space, leading to parameter fits that mischaracterize participants’ actual learning strategies.

One interesting way to evaluate this is to first generate simulated data from a model with a given set of parameters. This simulated data acts as a stand-in for empirical data from a human subject in an experiment, except that, critically, we know exactly which model and parameter combination generated it. Next, we fit the simulated data using the standard approach in the field (e.g., maximum likelihood estimation).
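For concreteness, here is a minimal sketch of that recovery loop in Julia. The functions simulateChoices and negLogLik are hypothetical placeholders for whatever model is under study; the actual functions we used (simulateQ and fitQ) appear in the Code section at the end of the post.

using Optim

trueParams = [0.1, 0.9, 500.0]                    # e.g., alpha, gamma, tau
data = simulateChoices(trueParams)                # synthetic "subject" data
start = rand(3) .* [1.0, 1.0, 1900.0]             # random start within the parameter ranges
fit = optimize(p -> negLogLik(p, data), start)    # maximum likelihood fit
recovered = Optim.minimizer(fit)
println("generating: ", trueParams, "  recovered: ", recovered)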

In an ideal world, we would be able to faithfully recover the set of parameters we used to generate the simulated choice data. Alternatively, if our parameter estimates are way off the mark, we should be cautious about interpreting the fit of such models to human subjects. The reason is that with human subjects we are uncertain about both the model AND the parameters, whereas in our simulation study we know our model is correct and are just asking about the parameters.

If our fitting approach can’t even make sense of things in the best case, we should worry about the worst case (see Figure 2 for a summary)!

Figure 2. Top panel shows the standard empirical situation. A human with unknown parameters performs a task, generating observable data. The RL model is fit to the human data, providing estimates of the latent parameters. There is no direct way to tell if the parameters are accurate. The bottom panel shows the situation explored here. The RL model with known parameter settings is applied to the same experiment given to human participants. The model's choice data are then re-fit with the same RL model. The hope is that the recovered parameter estimates closely match the ones used to generate the data in the first place.

RL Models of Dynamic Decision Making

The RL model that we considered is based on the one described in Gureckis & Love (2009) which itself was based on the famous Q-learning algorithm from computer science (Watkins 1989). This model is thought to be a good description of how people perform dynamic decision making tasks and thus provides an interesting domain in which to explore this issue. There are three free parameters in the model (α, γ, and τ, see Gureckis & Love, 2009 for a full description).
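To make the roles of the three parameters concrete, here is a stripped-down tabular sketch of softmax Q-learning in Julia. This is not the full Gureckis & Love (2009) model (which is listed in the Code section at the end of the post and uses a linear approximation over state features); it just illustrates what α (learning rate), γ (discount factor), and τ (softmax temperature) do.

# alpha = learning rate, gamma = discount factor, tau = softmax temperature
function softmaxChoice(q::Vector{Float64}, tau::Float64)
    p = exp.(q ./ tau)
    p ./= sum(p)
    return findfirst(cumsum(p) .>= rand())    # sample an action index
end

function qUpdate!(Q::Matrix{Float64}, s::Int, a::Int, r::Float64, sNext::Int,
                  alpha::Float64, gamma::Float64)
    # TD error: reward plus discounted best next value, minus the current estimate
    Q[s, a] += alpha * (r + gamma * maximum(Q[sNext, :]) - Q[s, a])
end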

In our simulation experiment, we applied the model to the same task used in Gureckis & Love (2009). The task, presented as a simple video game called “Farming on Mars”, involved subjects choosing between two different Martian farming robots with the goal of generating the most oxygen possible for a future population of human inhabitants. One robot (called the Long-Term robot) returned 400 + 1000 * h/10 units of oxygen, where h is the number of times the Long-Term robot was chosen in the last 10 trials. The other robot (the Short-Term robot) gave 900 + 1000 * h/10 units, with h defined the same way. Thus, while the Short-Term robot gives a higher reward on any individual trial, the optimal strategy is to always select the Long-Term robot. However, subjects in the experiment were unaware that the rewards depended on their previous choices and had to discover this over time.
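To make the payoff structure concrete, here is a small sketch (the variable h below is the number of Long-Term choices over the previous 10 trials, which is the quantity the simulation code at the end of the post tracks as its state):

# h = number of Long-Term choices in the last 10 trials (0-10)
longTermReward(h)  = 400 + 1000 * h / 10
shortTermReward(h) = 900 + 1000 * h / 10

# The Short-Term robot pays 500 more units on any single trial, but always
# choosing Long-Term eventually earns longTermReward(10) = 1400 per trial,
# versus shortTermReward(0) = 900 for always choosing Short-Term.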

A Simulation Experiment

As mentioned above, Gureckis & Love’s (2009) data were best-fit by a Q-Learning model which incorporated information about the latent state transitions of the task. To test if these fits were identifiable given the patterns of choice behavior recorded in the task we used the approach outlined in Figure 2.

Starting with a wide range of parameter combinations (i.e., various settings for α, γ, and τ), we generated choice data for a 500-trial “experiment” from 100 simulated subjects per combination of parameters. In the “Farming on Mars” task, human subjects generally learned to maximize, i.e., to choose the Long-Term option most of the time. Therefore, we were only concerned with parameter combinations that might generate this kind of human-like, maximizing behavior. We counted a simulated subject as maximizing if it chose the Long-Term option more than 60% of the time in the last 100 trials, and classified a parameter combination as consistently maximizing if at least 60% of its 100 simulated subjects maximized. The subject-level criterion is roughly equivalent to a less than 5% chance that the subject was responding randomly (a quick check of this is sketched below). Across all 1000 combinations that we tested, only 12 consistently displayed maximizing behavior. The table below shows all of the maximizing combinations and the number of simulated subjects (out of 100) that showed maximizing behavior for each.
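A quick check of the “less than 5%” claim (this assumes a purely random responder who picks the Long-Term robot with probability 0.5 on each of the last 100 trials, and uses the Distributions package, which is not part of the simulation code at the end of the post):

using Distributions

# chance that a random responder makes more than 60 Long-Term choices out of 100
pRandom = 1 - cdf(Binomial(100, 0.5), 60)
println(pRandom)    # ≈ 0.018, comfortably below 0.05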

Results

Having found a set of human-like parameters, we were now ready to answer our original question. We used Maximum Likelihood Estimation to fit parameters to each of the 100 simulated subjects generated for each of the 12 parameter combinations. For each simulated subject, we ran the fitting routine from 50 random starting locations in the search space and kept the fit with the highest likelihood. Finally, we took the median of these best fits over all 100 simulated subjects to get an idea of how close the typical parameter fit was (these medians are in the last three columns of the table below). Many of these fits recovered the generating parameters fairly well.
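The fitting step for a single simulated subject looks roughly like this (a sketch, assuming a negLogLik(params, choices, rewards) function such as the fitQ routine in the Code section at the end of the post):

using Optim

function fitWithRestarts(negLogLik, choices, rewards; nStarts = 50,
                         upper = [1900.0, 1.0, 1.0])    # tau, alpha, gamma upper bounds
    bestNLL = Inf
    bestParams = zeros(length(upper))
    for _ in 1:nStarts
        start = rand(length(upper)) .* upper            # random start in the parameter box
        res = optimize(p -> negLogLik(p, choices, rewards), start)
        if Optim.minimum(res) < bestNLL
            bestNLL = Optim.minimum(res)
            bestParams = Optim.minimizer(res)
        end
    end
    return bestParams, bestNLL
end

The best of the 50 restarts is kept for each simulated subject, and the medians in the table below are taken over those per-subject best fits.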

|    | Tau  | Alpha | Gamma | # Maximizing Simulations (of 100) | Median Fit Tau | Median Fit Alpha | Median Fit Gamma |
|----|------|-------|-------|-----------------------------------|----------------|------------------|------------------|
| 1  | 1330 | 1     | 0.4   | 61                                | 1292.330961    | 1.002661302      | 0.401409669      |
| 2  | 1520 | 0.9   | 0.3   | 68                                | 1449.070059    | 0.916268368      | 0.305883338      |
| 3  | 1520 | 1     | 0.3   | 64                                | 1472.299003    | 1.0107299        | 0.306711376      |
| 4  | 1710 | 0.9   | 0.2   | 60                                | 1794.842892    | 0.918753426      | 0.207316601      |
| 5  | 1710 | 0.9   | 0.3   | 63                                | 1591.74387     | 0.914328119      | 0.305986909      |
| 6  | 1710 | 1     | 0.3   | 65                                | 1737.748628    | 1.00735407       | 0.30399517       |
| 7  | 1710 | 1     | 0.4   | 61                                | 1721.370366    | 1.01534019       | 0.402315334      |
| 8  | 1900 | 0.9   | 0.2   | 64                                | 1947.288166    | 0.91075461       | 0.20415441       |
| 9  | 1900 | 0.9   | 0.3   | 74                                | 1943.371389    | 0.913595281      | 0.300533485      |
| 10 | 1900 | 0.9   | 0.4   | 61                                | 2012.40356     | 0.910731432      | 0.403546881      |
| 11 | 1900 | 1     | 0.3   | 72                                | 1869.468093    | 1.00419566       | 0.303685677      |
| 12 | 1900 | 1     | 0.4   | 60                                | 1879.272986    | 1.008092814      | 0.401227046      |

Of course, the recovered parameters do not match the input parameters exactly, but we wondered whether the differences were large enough to raise concern. For example, do the recovered parameters make entirely different predictions about how participants might behave in the task? To assess this, we generated new simulated choice data with the recovered parameters and compared the overall performance of these simulations to the performance of the original 12 parameter combinations. You can think of this exercise a bit like the game “telephone”: we create a model with a known setting of the parameters, generate a (noisy) sample of behavior from this model, fit this data with the same model, then generate a new (noisy) sample of behavior from the fitted parameters and see how the two compare. The final fits are thus the noisy signal that remains after the standard steps one would encounter in an empirical study.
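In code, the “telephone” comparison amounts to one extra round of simulation (a sketch, reusing the hypothetical simulateChoices placeholder from above and assuming it returns a vector of per-trial Long-Term choice indicators; recoveredParams stands for the median fitted parameters):

using Statistics

# round 1: average learning curve generated by the known parameters
originalCurve = mean([simulateChoices(trueParams) for _ in 1:100])

# round 2: average learning curve regenerated from the recovered parameters
recoveredCurve = mean([simulateChoices(recoveredParams) for _ in 1:100])

# If recovery worked, the two curves should lie nearly on top of one another.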

Figure 3 below plots the percentage of maximizing choices on each trial, averaged across the 100 simulated subjects, for both the original parameters and the “recovered” parameters. We binned the trials into successive blocks of 50 and plotted the mean of each bin. Qualitatively, most parameter estimates line up quite well with the originally generated data.

As you can see, in most cases the recovered parameters produce behavior very similar to that of the parameters used to generate the data. In a few cases there are minor discrepancies, which most likely have to do with the parameter optimization procedure getting stuck in a local minimum.


Figure 3. A comparison of behavior in the task for both the original simulations using the 12 maximizing parameters (in blue) and the maximum likelihood fits of those simulations (in green). The numbers for the plots correspond to the rows in the table.

Conclusion

Overall, our exploration of the identifiability of the Q-learning model was reassuring. For parameter combinations that lead to long-term reward-maximizing behavior, the fits to these simulations were faithful to the values we seeded our simulations with. This suggests that, were the Q-learning model exactly the correct model of human learning and decision making in this task, we would be reasonably safe in making inferences about the values of these parameters. This simulation also gives a bit of further support to the approach used in Gureckis & Love (2009).

More generally, this kind of exercise is an informative check on how much to trust a model fit. To facilitate replication, we have shared below the Julia code we used in this simulation experiment.

References

Gureckis, T.M. and Love, B.C. (2009) Short Term Gains, Long Term Pains: How Cues About State Aid Learning in Dynamic Environments. Cognition, 113, 293-313.

Watkins, C. (1989). Learning from delayed rewards. Unpublished doctoral dissertation, Cambridge University, Cambridge, England.

Code

Here is the Julia code for simulating and fitting the data:

using DataFrames
using Optim

# Model and simulation settings
alpha = 0.1      # learning rate
gamma = 0.1      # discount factor
tau = 500        # softmax temperature
theta = 650      # initial Q value for every state/action
nArms = 2        # number of choice options (robots)
nPlays = 500     # trials per simulated subject
nStates = 11     # task states (0-10 Long-Term choices in the last 10 trials)
nUnits = 14      # input features for the linear Q-value approximation
nTaus = 10       # grid resolution for tau
nAlphas = 10     # grid resolution for alpha
nGammas = 10     # grid resolution for gamma
nStarts = 50     # random restarts per maximum likelihood fit
mTau = 1900      # maximum tau in the grid
mAlpha = 1       # maximum alpha in the grid
mGamma = 1       # maximum gamma in the grid
nSubs = 100      # simulated subjects per parameter combination
type Results
choices::Array{Float64,1}
rewards::Array{Float64,1}
end
function simulateQ(theta, alpha, tau, gamma, nArms, nPlays, nStates, nUnits)
wA = [ones(nUnits) for x = 1:nArms]
optimal = zeros(nPlays)
rewards = zeros(nPlays)
choices = zeros(nPlays)
i = zeros(nUnits)
prevI = zeros(nUnits)
sP = 0
s = 0
qSA = ones(nStates, nArms) * theta
for pi = 1:nPlays
sum = 0
# Softmax
for a = 1:nArms
sum += exp(qSA[s + 1, a]/tau)
end
r = rand()
arm = 1
gibbsSum = exp(qSA[s + 1, arm]/tau)/sum
while gibbsSum < r
arm += 1
gibbsSum += exp(qSA[s + 1, arm]/tau)/sum
end
choices[pi] = arm
prevI = i
#Get reward
if arm == 1
#Long-term
reward = 400 + 1000 * s/10
if sP < 10
sP += 1
end
i[13] = 0
i[14] = reward/mTau
else
#Short-term
reward = 900 + 1000 * s/10
if sP > 0
sP -= 1
end
i[13] = reward/mTau
i[14] = 0
end
i[1] = sP/10
i[2:12] = 0
i[sP + 2] = 1
rewards[pi] = reward
# pick a' = argmax over actions of Q(s', a)
maxA = 1
if qSA[sP + 1, 1] <= qSA[sP + 1, 2]
maxA = 2
end
#Update w
delta = reward + gamma * qSA[sP + 1, maxA] - qSA[s + 1, arm]
wA[arm] = wA[arm] .+ (alpha * delta * prevI)
qSA[s + 1, arm] = dot(wA[arm], i)
s = sP
# println(wA)   # debugging output (prints the weights on every trial)
end
return Results(choices, rewards)
end
function fitQ(params, choices, rewards, theta, nArms, nStates)
wA = [ones(nUnits) for x = 1:nArms]
ll = 0
sP = 0
s = 0
i = zeros(nUnits)
prevI = zeros(nUnits)
qSA = ones(nStates, nArms) * theta
for ci = 1:length(choices)
sum = 0
arm = choices[ci]
reward = rewards[ci]
# softmax
for a = 1:nArms
sum += exp(qSA[s + 1, a]/params[1])
end
# probability of picking arm
prob = exp(qSA[s + 1, arm]/params[1])/sum
prevI = i
# fix for more states
if arm == 1
if sP < 10
sP += 1
end
i[13] = 0
i[14] = reward/mTau
else
if sP > 0
sP -= 1
end
i[13] = reward/mTau
i[14] = 0
end
i[1] = sP/10
i[2:12] = 0
i[sP + 2] = 1
# pick a'
maxA = 1
if qSA[sP + 1, 1] <= qSA[sP + 1, 2]
maxA = 2
end
delta = reward + params[3] * qSA[sP + 1, maxA] - qSA[s + 1, arm]
wA[arm] = wA[arm] .+ (params[2] * delta * prevI)
qSA[s + 1, arm] = dot(wA[arm], i)
s = sP
# negative log likelihood
ll += log(prob)
end
return -ll
end
#Classify parameter combinations as maximizing or meliorating
classifier = DataFrame()
classifier["Tau"] = zeros(nTaus * nAlphas * nGammas)
classifier["Alpha"] = zeros(nTaus * nAlphas * nGammas)
classifier["Gamma"] = zeros(nTaus * nAlphas * nGammas)
classifier["nMax"] = zeros(nTaus * nAlphas * nGammas)
classifier["nMel"] = zeros(nTaus * nAlphas * nGammas)
classifier["Max"] = zeros(nTaus * nAlphas * nGammas)
classifier["Mel"] = zeros(nTaus * nAlphas * nGammas)
classifier["Rand"] = zeros(nTaus * nAlphas * nGammas)
index = 1
for tau = mTau/nTaus:mTau/nTaus:mTau
for alpha = mAlpha/nAlphas:mAlpha/nAlphas:mAlpha
for gamma = mGamma/nGammas:mGamma/nGammas:mGamma
classifier["Tau"][index] = tau
classifier["Alpha"][index] = alpha
classifier["Gamma"][index] = gamma
nMax = 0
nMel = 0
for i = 1:nSubs
sim = simulateQ(theta, alpha, tau, gamma, nArms, nPlays, nStates, nUnits)
aveChoice = mean(sim.choices[400:nPlays] - 1)
if aveChoice <= .4
nMax += 1
elseif aveChoice >= .6
nMel += 1
end
end
classifier["nMax"][index] = nMax
classifier["nMel"][index] = nMel
if nMax >= 60
classifier["Max"][index] = 1
elseif nMel >= 60
classifier["Mel"][index] = 1
else
classifier["Rand"][index] = 1
end
index += 1
end
end
end
#Matrix containing just maximizing parameters
m = classifier[:(Max .== 1), :]
#Learning curves for maximizing parameters
learncurves = DataFrame()
learncurves["Taus"] = m["Taus"]
learncurves["Alphas"] = m["Alphas"]
learncurves["Gammas"] = m["Gammas"]
learncurves["Percent LT by trial"] = [zeros(nPlays) for i = 1:length(m[1])]
for index = 1:length(m[1])
tau = learncurves["Taus"][index]
alpha = learncurves["Alphas"][index]
gamma = learncurves["Gammas"][index]
for i = 1:nSubs
sim = simulateQ(theta, alpha, tau, gamma, nArms, nPlays, nStates, nUnits)
learncurves["Percent LT by trial"][index] += (sim.choices - 1)
end
learncurves["Percent LT by trial"][index] /= nSubs
learncurves["Percent LT by trial"][index] = 1 - learncurves["Percent LT by trial"][index]
end
#Confusion Matrix for Maximizing Parameters
confusion = DataFrame()
confusion["Taus"] = zeros(length(m[1]))
confusion["Alphas"] = zeros(length(m[1]))
confusion["Gammas"] = zeros(length(m[1]))
confusion["Median Fit Taus"] = zeros(length(m[1]))
confusion["Median Fit Alphas"] = zeros(length(m[1]))
confusion["Median Fit Gammas"] = zeros(length(m[1]))
confusion["MinTaus"] = [zeros(nSubs) for i = 1:length(m[1])]
confusion["MinAlphas"] = [zeros(nSubs) for i = 1:length(m[1])]
confusion["MinGammas"] = [zeros(nSubs) for i = 1:length(m[1])]
for index = 1:length(m[1])
tau = m["Tau"][index]
alpha = m["Alpha"][index]
gamma = m["Gamma"][index]
minTaus = zeros(nSubs)
minAlphas = zeros(nSubs)
minGammas = zeros(nSubs)
for i = 1:nSubs
sim = simulateQ(theta, alpha, tau, gamma, nArms, nPlays, nStates, nUnits)
minLL = Inf   # best (lowest) negative log likelihood found so far
for j = 1:nStarts
sTau = rand() * mTau
sAlpha = rand() * mAlpha
sGamma = rand() * mGamma
o = Optim.optimize(p -> fitQ(p, sim.choices, sim.rewards, theta, nArms, nStates), [sTau, sAlpha, sGamma])
if o.f_minimum < minLL
minLL = o.f_minimum
(minTaus[i], minAlphas[i], minGammas[i]) = o.minimum
end
end
end
confusion["Taus"][index] = tau
confusion["Alphas"][index] = alpha
confusion["Gammas"][index] = gamma
confusion["Median Fit Taus"][index] = median(minTaus)
confusion["Median Fit Alphas"][index] = median(minAlphas)
confusion["Median Fit Gammas"][index] = median(minGammas)
confusion["MinTaus"][index] = minTaus
confusion["MinAlphas"][index] = minAlphas
confusion["MinGammas"][index] = minGammas
end
#Learning curves for fits
fitcurves = DataFrame()
taus = confusion["MinTaus"]
alphas = confusion["MinAlphas"]
gammas = confusion["MinGammas"]
fitcurves["Taus"] = confusion["Taus"]
fitcurves["Alphas"] = confusion["Alphas"]
fitcurves["Gammas"] = confusion["Gammas"]
fitcurves["Percent LT by trial"] = [zeros(nPlays) for i = 1:length(taus)]
for index = 1:length(taus)
for i = 1:length(taus[index])
tau = taus[index][i]
alpha = alphas[index][i]
gamma = gammas[index][i]
sim = simulateQ(theta, alpha, tau, gamma, nArms, nPlays, nStates, nUnits)
fitcurves["Percent LT by trial"][index] += (sim.choices - 1)
end
fitcurves["Percent LT by trial"][index] /= nSubs
fitcurves["Percent LT by trial"][index] = 1 - fitcurves["Percent LT by trial"][index]
end






  1. This is very cool and somewhat surprising. I guess the problem may arise when the true model and parameters are both unknown. A model can be yoked (i.e., trial-by-trial fit) to a participant’s past responses and provide a good fit, even when the model alone could never accomplish the task.

    Brad Love
  2. Great post! But I’m curious as to what happens when you add noise–whether random or due to some other model. In my admittedly limited experience working with this type of paradigm, it seems very unlikely that what’s going on in subjects’ heads is well-described by a single model all of the time. Even if we allow that Gureckis & Love (2009) provides the single best description of what’s going on, it seems unthinkable that most subjects wouldn’t be experimenting with other strategies/learning approaches at least some of the time–particularly early on. I’m curious if you still recapture the parameters reasonably accurately if you have a second-order parameter that, say, chooses the Gureckis & Love (2009) model 80% of the time, responds randomly 10% of the time, and uses some other simple strategy (e.g., win-stay/lose-switch) 10% of the time. The point being that while the ability to recapture the parameters will obviously break down at some point, it would be very informative to know where that point is.

    The other question is how much data you need for the simulation results to stabilize. You use 500 trials here, but that’s considerably more than many people who fit RL models have available. What happens if you only have, say, 100 trials?

    Tal Yarkoni
  3. [...] or as a general “sanity-check” for any model (the model, after being fit, should generate the same pattern of behavior). Obviously, it was still up to us to determine which types of noise to consider (we could have [...]

    thinking about thinking » When does a rational model “fit”?
I would like to try out the code, but it seems that the > (greater than) and < (less than) signs have all been manhandled along the way. I have tried to correct it but there are some areas that still seem to be missing something.

    For example line 63:63 read:
    if qSA[sP + 1, 1] maxA = 2
    end

    I took this to mean if (qSA[sP + 1, 1]) resolves to true maxA = 2. But qSa is a matrix of Float64 and, unlike integers, cannot be treated as a boolean. So I am not clear as to what the intent was.

Seems that a Github gist might have been a better place to post this code.

    Any chance of getting the correct version?

    Andre
  5. Hey Andre, good call, the line should be:
if qSA[sP + 1, 1] <= qSA[sP + 1, 2]
    Not sure how that got screwed up but there may be other typos in there so here's the github gist: https://gist.github.com/dhalpern/c241f89a68bc799a71bb

    David Halpern