series about Reinforcement Learning (RL), following Sutton and Barto's well-known book "Reinforcement Learning" [1].
In the previous posts we finished dissecting Part I of said book, which introduces fundamental solution methods that form the basis for many RL techniques. These are: Dynamic Programming (DP), Monte Carlo methods (MC), and Temporal-Difference learning (TD). What separates Part I from Part II of Sutton's book, and justifies the distinction, is a constraint on the problem size: while Part I covered tabular solution methods, we now dare to dive deeper into this fascinating topic and include function approximation.
To make this concrete: in Part I we assumed the state space of the problems under investigation to be small enough that we could represent it, and the learned solutions, in a simple table (imagine a table assigning a certain "goodness", a value, to every state). Now, in Part II, we drop this assumption and are thus able to tackle arbitrary problems.
And this changed setup is dearly needed, as we could observe first-hand: in a previous post we managed to learn to play Tic-Tac-Toe, but already failed for Connect Four, since the number of states there is on the order of 10²⁰. Or consider an RL problem that learns a task from camera images: the number of possible camera images is larger than the number of atoms in the known universe [1].
These numbers should convince everyone that approximate solution methods are absolutely necessary. Besides enabling us to tackle such problems, they also offer generalization: with tabular methods, two close but still different states were treated completely separately, whereas with approximate solution methods we may hope that our function approximation detects such close states and generalizes across them.
With that, let's begin. In the next few paragraphs, we'll:
- give an introduction to function approximation
- present solution methods for such problems
- discuss different choices for approximation functions.
Introduction to Function Approximation
As opposed to tabular solution methods, for which we used a table to represent e.g. value functions, we now use a parametrized function

$$\hat{v}(s, \mathbf{w}) \approx v_\pi(s)$$

with a weight vector

$$\mathbf{w} \in \mathbb{R}^d.$$
$\hat{v}$ can be anything, such as a linear function of the input values or a deep neural network. Later in this post we'll discuss different possibilities in detail.
Usually, the number of weights is much smaller than the number of states, and this is exactly what yields generalization: when we update our function by adjusting some weights, we don't just update a single entry in a table; the change (potentially) affects all other estimates, too.
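To illustrate this generalization effect, here is a minimal sketch (the feature vectors and numbers are invented for illustration): a linear $\hat{v}(s, \mathbf{w})$ where nudging the weights for one state also shifts the estimate of a nearby state with similar features.

```python
import numpy as np

def v_hat(x, w):
    """Approximate value of a state with feature vector x, given weights w."""
    return float(np.dot(w, x))

w = np.zeros(3)                    # far fewer weights than states
x_a = np.array([1.0, 0.5, 0.2])    # features of some state A
x_b = np.array([1.0, 0.6, 0.2])    # a nearby state B with similar features

w += 0.1 * x_a                     # raise the estimate for A ...
print(v_hat(x_a, w))               # ... and B's estimate moves too,
print(v_hat(x_b, w))               # unlike a single table entry
```

With a table, updating A would leave B untouched; here both estimates move because they share weights.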
Let's recap the update rules from several of the methods we saw in earlier posts.
MC methods assign the observed return G as the value estimate for a state:

$$S_t \mapsto G_t$$

TD(0) bootstraps from the value estimate of the next state:

$$S_t \mapsto R_{t+1} + \gamma \hat{v}(S_{t+1}, \mathbf{w}_t)$$

while DP uses:

$$s \mapsto \mathbb{E}_\pi\left[ R_{t+1} + \gamma \hat{v}(S_{t+1}, \mathbf{w}_t) \mid S_t = s \right]$$
From now on, we'll interpret updates of the form s ↦ u as input/output pairs of a function we wish to approximate, and for this use methods from machine learning, namely supervised learning. A task in which numbers (u) have to be estimated is called function approximation, or regression.
To solve this problem, we can in theory resort to any method suitable for such a task. We will discuss concrete choices in a bit, but should mention that there are certain requirements on such methods: for one, they should be able to handle incrementally growing datasets, since in RL we usually build up experience over time, which differs from, e.g., classical supervised learning tasks. Further, the chosen method should be able to handle non-stationary targets, which we'll discuss in the next subsection.
The Prediction Objective
Throughout Part I of Sutton's book we never needed a prediction objective or similar; after all, we could always converge to the optimal function describing every state's value perfectly. For the reasons stated above, this is no longer possible, requiring us to define an objective, a cost function, which we want to optimize.
We use the following:

$$\overline{VE}(\mathbf{w}) = \sum_{s} \mu(s) \left[ v_\pi(s) - \hat{v}(s, \mathbf{w}) \right]^2$$
Let's try to understand this. It is a weighted average of the squared differences between predicted and actual values, which intuitively makes sense and is common in supervised learning. Note that it requires us to define a distribution µ, which specifies how much we care about each state.
Usually, this is simply a measure proportional to how often states are visited, known as the on-policy distribution, which we'll focus on in this section.
However, note that it's actually not clear whether this is the right objective: in RL, we care about finding good policies. A method might optimize the above objective extremely well and still fail to solve the problem at hand, e.g. when the policy spends too much time in undesired states. Still, as mentioned, we need some such objective, and for lack of better alternatives we simply optimize this one.
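To make the objective concrete, here is a tiny sketch (all numbers invented for illustration) of how $\overline{VE}$ weights each state's squared prediction error by the distribution µ:

```python
import numpy as np

def ve(mu, v_pi, v_hat):
    """Mean-squared value error: mu-weighted squared prediction errors."""
    return float(np.sum(mu * (v_pi - v_hat) ** 2))

mu     = np.array([0.5, 0.3, 0.2])   # how often each state is visited
v_pi   = np.array([1.0, 2.0, 3.0])   # true values (unknown in practice)
v_hat_ = np.array([1.1, 1.8, 3.5])   # current approximation

# 0.5*0.01 + 0.3*0.04 + 0.2*0.25 = 0.067
print(ve(mu, v_pi, v_hat_))
```

Errors in frequently visited states dominate the objective; errors in rarely visited states barely matter.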
Next, let's introduce a method for minimizing this objective.
Minimizing the Prediction Objective
The tool we pick for this task is Stochastic Gradient Descent (SGD). Unlike Sutton, I don't want to go into too many details here and will only focus on the RL part, so I refer the reader to [1] or any other tutorial on SGD / deep learning.
In principle, SGD uses batches (or mini-batches) to compute the gradient of the objective and updates the weights a small step in the direction minimizing this objective.
Applied to our objective, this yields the update rule:

$$\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha \left[ v_\pi(S_t) - \hat{v}(S_t, \mathbf{w}_t) \right] \nabla \hat{v}(S_t, \mathbf{w}_t)$$

Now the interesting part: assume that the target is not the true value $v_\pi$, but some (noisy) approximation of it, say $U_t$:

$$\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha \left[ U_t - \hat{v}(S_t, \mathbf{w}_t) \right] \nabla \hat{v}(S_t, \mathbf{w}_t)$$
One can show that if $U_t$ is an unbiased estimate of $v_\pi$, the solution obtained via SGD converges to a local optimum, which is convenient. We can now simply use e.g. the MC return as $U_t$ and obtain our very first gradient RL method:

$$\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha \left[ G_t - \hat{v}(S_t, \mathbf{w}_t) \right] \nabla \hat{v}(S_t, \mathbf{w}_t)$$
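Gradient Monte Carlo prediction can be sketched as follows. The environment (a tiny random-walk chain) and the one-hot features are my own invented toy setup, not from the post; with one-hot features the linear case degenerates to a table, which makes the result easy to check against the known true values 1/6, 2/6, ..., 5/6.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                          # non-terminal states 0..4; stepping off either end terminates

def features(s):
    """One-hot feature vector for state s."""
    x = np.zeros(N)
    x[s] = 1.0
    return x

def episode():
    """Random walk from the center; reward +1 only when exiting on the right."""
    s, traj = N // 2, []
    while 0 <= s < N:
        traj.append(s)
        s += rng.choice([-1, 1])
    return traj, (1.0 if s >= N else 0.0)

w, alpha = np.zeros(N), 0.05
for _ in range(2000):
    traj, G = episode()        # undiscounted: every visited state gets return G
    for s in traj:
        x = features(s)
        # gradient MC update: move v_hat(s, w) toward the observed return G
        w += alpha * (G - np.dot(w, x)) * x

print(np.round(w, 2))          # roughly [1/6, 2/6, 3/6, 4/6, 5/6]
```

Since the target $G_t$ is an unbiased sample of $v_\pi(S_t)$, this is a true gradient method with the convergence guarantee mentioned above.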
It is also possible to use other targets for $U_t$, in particular bootstrapped ones, i.e. targets built from previous estimates. When doing so we lose these convergence guarantees, but, as so often, this still works well empirically. Such methods are called semi-gradient methods, since they only consider the effect of changing the weights on the value estimate being updated, but not on the target.
Based on this we can introduce TD(0) with function approximation:

$$\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha \left[ R_{t+1} + \gamma \hat{v}(S_{t+1}, \mathbf{w}_t) - \hat{v}(S_t, \mathbf{w}_t) \right] \nabla \hat{v}(S_t, \mathbf{w}_t)$$
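A sketch of semi-gradient TD(0) on the same kind of toy chain as before (again with invented details). The key point is visible in the code: the bootstrapped target $R_{t+1} + \gamma \hat{v}(S_{t+1}, \mathbf{w})$ is computed with the current weights but treated as a constant; only the gradient of the estimate $\hat{v}(S_t, \mathbf{w})$ is taken.

```python
import numpy as np

rng = np.random.default_rng(1)
N, gamma, alpha = 5, 1.0, 0.05

def x_of(s):
    """One-hot features for state s."""
    x = np.zeros(N)
    x[s] = 1.0
    return x

w = np.zeros(N)
for _ in range(3000):
    s = N // 2
    while 0 <= s < N:
        s2 = s + rng.choice([-1, 1])
        r = 1.0 if s2 >= N else 0.0           # +1 only when exiting right
        v_next = np.dot(w, x_of(s2)) if 0 <= s2 < N else 0.0
        target = r + gamma * v_next           # bootstrapped target, held fixed
        # semi-gradient update: no gradient is taken through `target`
        w += alpha * (target - np.dot(w, x_of(s))) * x_of(s)
        s = s2

print(np.round(w, 2))          # roughly [1/6, 2/6, 3/6, 4/6, 5/6]
```

Updates happen per step rather than per episode, which is exactly what the bootstrapped target buys us.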
A natural extension of this, and also an extension of the corresponding n-step tabular method, is n-step semi-gradient TD:

$$\mathbf{w}_{t+n} = \mathbf{w}_{t+n-1} + \alpha \left[ G_{t:t+n} - \hat{v}(S_t, \mathbf{w}_{t+n-1}) \right] \nabla \hat{v}(S_t, \mathbf{w}_{t+n-1})$$

with the n-step return

$$G_{t:t+n} = R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{n-1} R_{t+n} + \gamma^n \hat{v}(S_{t+n}, \mathbf{w}_{t+n-1}).$$
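The n-step return used as the target above can be sketched in a few lines (rewards and the bootstrap value below are invented for illustration):

```python
import numpy as np

def n_step_return(rewards, v_bootstrap, gamma):
    """Discounted sum of n rewards plus a discounted bootstrap value."""
    g = sum(gamma ** i * r for i, r in enumerate(rewards))
    return g + gamma ** len(rewards) * v_bootstrap

# two observed rewards, then bootstrap from v_hat(S_{t+2}, w):
# 1.0 + 0.9*0.0 + 0.81*0.5 = 1.405
print(n_step_return([1.0, 0.0], v_bootstrap=0.5, gamma=0.9))
```

For n = 1 this reduces to the TD(0) target; letting n cover the whole episode recovers the MC return.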
Methods for Function Approximation
In the remainder of Chapter 9, Sutton describes different ways of representing the approximate function: a large part of the chapter covers linear function approximation and feature design for it, and for non-linear function approximation artificial neural networks are introduced. We will only briefly cover these topics, as on this blog we mainly work with (deep) neural networks rather than simple linear approximations, and also suspect the astute reader is already familiar with the basics of deep learning and neural networks.
Linear Function Approximation
Still, let's briefly discuss linear approximation. Here, the state-value function is approximated by the inner product:

$$\hat{v}(s, \mathbf{w}) = \mathbf{w}^\top \mathbf{x}(s) = \sum_{i=1}^{d} w_i \, x_i(s)$$

Here, the state is described by the feature vector

$$\mathbf{x}(s) = \left( x_1(s), x_2(s), \dots, x_d(s) \right)^\top$$

and, as we can see, the estimate is a linear combination of the weights.
Due to the simplicity of this representation, there are some elegant formulas (and closed-form solutions), as well as some convergence guarantees.
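As a sketch of such a closed-form solution (all numbers invented for illustration): for linear $\hat{v}$, minimizing the µ-weighted squared error reduces to weighted least squares, solvable via the normal equations.

```python
import numpy as np

# rows are feature vectors x(s) for three states: a bias plus one feature
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
v_pi = np.array([0.5, 1.0, 1.5])    # "true" values to fit (invented)
D = np.diag([0.25, 0.5, 0.25])      # on-policy distribution mu as a weight matrix

# weighted normal equations: (X^T D X) w = X^T D v_pi
w = np.linalg.solve(X.T @ D @ X, X.T @ D @ v_pi)
print(np.round(w, 3))               # exact fit here: [0.5, 0.5]
```

For tabular-sized problems this is feasible; for large state spaces we fall back to the incremental SGD updates from the previous section.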
Feature Construction for Linear Methods
A limitation of the naive linear function approximation introduced above is that each feature contributes individually; no interaction between features is possible. Sutton lists the cart-pole problem as an example: there, a high angular velocity can be good or bad depending on the context. When the pole is nicely balanced, one should probably avoid quick, jerky movements. However, the closer the pole gets to falling over, the faster the corrective movements may need to be.
There is thus a separate branch of research on designing efficient feature representations (although one could argue that, due to the rise of deep learning, this is becoming less important).
One such representation are polynomials. As an introductory example, assume the state vector consists of two components, s₁ and s₂. We could then define the feature vector:

$$\mathbf{x}(s) = \left( 1, \; s_1, \; s_2, \; s_1 s_2 \right)^\top$$

Then, using this representation, we can still do linear function approximation, i.e. apply four weights to the four newly constructed features, and overall still have a function that is linear in the weights.
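The two-component example above can be sketched directly (the weights are invented for illustration): the estimate is non-linear in the raw state thanks to the product term, yet still linear in the weights.

```python
import numpy as np

def poly_features(s1, s2):
    """Feature vector (1, s1, s2, s1*s2) for a two-component state."""
    return np.array([1.0, s1, s2, s1 * s2])

w = np.array([0.5, -1.0, 2.0, 0.25])     # four weights, one per feature
s1, s2 = 2.0, 3.0

# v_hat = 0.5 - 1.0*2 + 2.0*3 + 0.25*6 = 6.0
print(np.dot(w, poly_features(s1, s2)))
```

The s₁·s₂ term is exactly the kind of feature interaction that the naive linear representation cannot express.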
More generally, for a k-dimensional state, each polynomial-basis feature of order n can be written as

$$x_i(s) = \prod_{j=1}^{k} s_j^{c_{i,j}},$$

where the $c_{i,j}$ are integers in {0, …, n}.
Other commonly used bases are the Fourier basis, coarse and tile coding, and radial basis functions, but as mentioned we won't dive deeper at this point.
Conclusion
In this post we made an important step beyond the previous posts towards deploying RL algorithms "in the wild". In the preceding posts, we focused on introducing the essential RL methods, albeit in the form of tabular methods. We saw that these quickly reach their limits when applied to larger problems, and thus realized that approximate solution methods are needed.
In this post we introduced the fundamentals for this. Besides enabling us to tackle large-scale, real-world problems, these methods also introduce generalization, a strong necessity for any successful RL algorithm.
We started by introducing a suitable prediction objective and ways of optimizing it.
Then we introduced our first gradient and semi-gradient RL algorithms for the prediction objective, that is, learning a value function for a given policy.
Finally, we discussed different ways of constructing the approximation function.
As always, thanks for reading! And if you are interested, stay tuned for the next post, in which we'll dive into the corresponding control problem.
Other Posts in this Series
References
[1] Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction (2nd ed.). MIT Press.