IIT-M RL-ASSIGNMENT-2-GRIDWORLD
Solution for submission 132227
A detailed solution for submission 132227 submitted for challenge IIT-M RL-ASSIGNMENT-2-GRIDWORLD
What is the notebook about?¶
Problem - Gridworld Environment Algorithms¶
This problem deals with a grid world and stochastic actions. The tasks you have to do are:
- Implement Policy Iteration
- Implement Value Iteration
- Implement TD Lambda
- Visualize the results
- Explain the results
How to use this notebook? 📝¶
This is a shared template and any edits you make here will not be saved. You should make a copy in your own Drive: click the "File" menu (top-left), then "Save a Copy in Drive". You can then work in your copy however you like.
Update the config parameters. You can define the common variables here
Variable | Description
---|---
AICROWD_DATASET_PATH | Path to the file containing test data. This should be an absolute path.
AICROWD_RESULTS_DIR | Path to write the output to.
AICROWD_ASSETS_DIR | In case your notebook needs additional files (like model weights, etc.), you can add them to a directory and specify the path to the directory here (please specify a relative path). The contents of this directory will be sent to AIcrowd for evaluation.
AICROWD_API_KEY | In order to submit your code to AIcrowd, you need to provide your account's API key. This key is available at https://www.aicrowd.com/participants/me
- Installing packages. Please use the Install packages 🗃 section to install the packages
Setup AIcrowd Utilities 🛠¶
We use this to bundle the files for submission and create a submission on AIcrowd. Do not edit this block.
!pip install aicrowd-cli > /dev/null
AIcrowd Runtime Configuration 🧷¶
Get login API key from https://www.aicrowd.com/participants/me
import os
import matplotlib.pyplot as plt
import seaborn as sns
AICROWD_DATASET_PATH = os.getenv("DATASET_PATH", os.getcwd()+"/a5562c7d-55f0-4d06-841c-110655bb04ec_a2_gridworld_inputs.zip")
AICROWD_RESULTS_DIR = os.getenv("OUTPUTS_DIR", "results")
API_KEY = "" # Get your key from https://www.aicrowd.com/participants/me
!unzip -q $AICROWD_DATASET_PATH
DATASET_DIR = 'inputs/'
GridWorld Environment¶
Read the code for the environment thoroughly
Do not edit the code for the environment
import numpy as np
class GridEnv_HW2:
def __init__(self,
goal_location,
action_stochasticity,
non_terminal_reward,
terminal_reward,
grey_in,
brown_in,
grey_out,
brown_out
):
# Do not edit this section
self.action_stochasticity = action_stochasticity
self.non_terminal_reward = non_terminal_reward
self.terminal_reward = terminal_reward
self.grid_size = [10, 10]
# Index of the actions
self.actions = {'N': (1, 0),
'E': (0,1),
'S': (-1,0),
'W': (0,-1)}
self.perpendicular_order = ['N', 'E', 'S', 'W']
l = ['normal' for _ in range(self.grid_size[0]) ]
self.grid = np.array([l for _ in range(self.grid_size[1]) ], dtype=object)
self.grid[goal_location[0], goal_location[1]] = 'goal'
self.goal_location = goal_location
for gi in grey_in:
self.grid[gi[0],gi[1]] = 'grey_in'
for bi in brown_in:
self.grid[bi[0], bi[1]] = 'brown_in'
for go in grey_out:
self.grid[go[0], go[1]] = 'grey_out'
for bo in brown_out:
self.grid[bo[0], bo[1]] = 'brown_out'
self.grey_outs = grey_out
self.brown_outs = brown_out
def _out_of_grid(self, state):
if state[0] < 0 or state[1] < 0:
return True
elif state[0] > self.grid_size[0] - 1:
return True
elif state[1] > self.grid_size[1] - 1:
return True
else:
return False
def _grid_state(self, state):
return self.grid[state[0], state[1]]
def get_transition_probabilites_and_reward(self, state, action):
"""
Returns the probabiltity of all possible transitions for the given action in the form:
A list of tuples of (next_state, probability, reward)
Note that based on number of state and action there can be many different next states
Unless the state is All the probabilities of next states should add up to 1
"""
grid_state = self._grid_state(state)
if grid_state == 'goal':
return [(self.goal_location, 1.0, 0.0)]
elif grid_state == 'grey_in':
npr = []
for go in self.grey_outs:
npr.append((go, 1/len(self.grey_outs),
self.non_terminal_reward))
return npr
elif grid_state == 'brown_in':
npr = []
for bo in self.brown_outs:
npr.append((bo, 1/len(self.brown_outs),
self.non_terminal_reward))
return npr
direction = self.actions.get(action, None)
if direction is None:
raise ValueError("Invalid action %s , please select among" % action, list(self.actions.keys()))
dir_index = self.perpendicular_order.index(action)
wrap_acts = self.perpendicular_order[dir_index:] + self.perpendicular_order[:dir_index]
next_state_probs = {}
for prob, a in zip(self.action_stochasticity, wrap_acts):
d = self.actions[a]
next_state = (state[0] + d[0]), (state[1] + d[1])
if self._out_of_grid(next_state):
next_state = state
next_state_probs.setdefault(next_state, 0.0)
next_state_probs[next_state] += prob
npr = []
for ns, prob in next_state_probs.items():
next_grid_state = self._grid_state(ns)
reward = self.terminal_reward if next_grid_state == 'goal' else self.non_terminal_reward
npr.append((ns, prob, reward))
return npr
def step(self, state, action):
npr = self.get_transition_probabilites_and_reward(state, action)
probs = [t[1] for t in npr]
sampled_idx = np.random.choice(range(len(npr)), p=probs)
sampled_npr = npr[sampled_idx]
next_state = sampled_npr[0]
reward = sampled_npr[2]
is_terminal = next_state == tuple(self.goal_location)
return next_state, reward, is_terminal
Example environment¶
This has the same setup as in the PDF; do not edit the settings.
def get_base_kwargs():
goal_location = (9,9)
action_stochasticity = [0.8, 0.2/3, 0.2/3, 0.2/3]
grey_out = [(3,2), (4,2), (5,2), (6,2)]
brown_in = [(9,7)]
grey_in = [(0,0)]
brown_out = [(1,7)]
non_terminal_reward = 0
terminal_reward = 10
base_kwargs = {"goal_location": goal_location,
"action_stochasticity": action_stochasticity,
"brown_in": brown_in,
"grey_in": grey_in,
"brown_out": brown_out,
"non_terminal_reward": non_terminal_reward,
"terminal_reward": terminal_reward,
"grey_out": grey_out,}
return base_kwargs
base_kwargs = get_base_kwargs()
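As a quick sanity check before implementing the algorithms, the environment above can be probed directly. The following is a minimal sketch (it assumes the cells above have already been run so that GridEnv_HW2 and base_kwargs are in scope; env_demo and the chosen state (4, 4) are purely illustrative):
# Minimal sketch: probe the environment defined above (illustrative only).
env_demo = GridEnv_HW2(**base_kwargs)
# Transition model for taking 'N' from state (4, 4):
# a list of (next_state, probability, reward) tuples whose probabilities sum to 1.
for ns, prob, rew in env_demo.get_transition_probabilites_and_reward((4, 4), 'N'):
    print(ns, prob, rew)
# One stochastic step from the same state.
next_state, reward, is_terminal = env_demo.step((4, 4), 'N')
print(next_state, reward, is_terminal)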
Task 2.1 - Value Iteration¶
Run value iteration on the environment and generate the policy and expected reward
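For reference, the update implemented below is the synchronous Bellman optimality backup, repeated until the largest change in any state's value falls below 1e-8:
$$J_{k+1}(s) = \max_{a \in A} \sum_{s' \in S} P_{ss'}(a)\,\big[r(s,a,s') + \gamma J_k(s')\big], \qquad \pi_{k+1}(s) = \arg\max_{a \in A} \sum_{s' \in S} P_{ss'}(a)\,\big[r(s,a,s') + \gamma J_k(s')\big]$$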
def value_iteration(env, gamma):
# Initial Values
values = np.zeros((10, 10))
# Initial policy
policy = np.empty((10, 10), object)
policy[:] = 'N' # Make all the policy values as 'N'
# Begin code here
extra_info = {'delta':[]} #Put your extra information needed for plots etc in this dictionary
delta=1e-8
brown_in_loc = np.where(env.grid == 'brown_in')
brown_out_loc = np.where(env.grid == 'brown_out')
grey_in_loc = np.where(env.grid == 'grey_in')
state_brown_in = brown_in_loc[0][0], brown_in_loc[1][0]
state_brown_out = brown_out_loc[0][0], brown_out_loc[1][0]
state_grey_in = grey_in_loc[0][0], grey_in_loc[1][0]
extra_info['brown_in']=[]
extra_info['brown_out']=[]
extra_info['grey_in']=[]
possible_actions=list(env.actions.keys())
while (delta>=1e-8): #repeat until delta reduces less than 1e-8
delta=0.0
vg,pl = np.zeros((10, 10)),np.empty((10, 10), object)
for x in range(env.grid_size[0]): #go over all x-coordinates
for y in range(env.grid_size[1]): #go over all y-coordinates
temp_J=[]
for a in possible_actions: #go over all possible actions
nextstates_probs_rews=env.get_transition_probabilites_and_reward((x,y),a) #obtain possible next states, probabilities, rewards from environment
sum=0.0
for npr in nextstates_probs_rews: #go over all possible next states
sum+=npr[1]*(npr[2]+gamma*values[(npr[0][0],npr[0][1])]) #approach optimal value incrementally by using operator T
temp_J.append(sum)
max_J=max(temp_J) #find maximum value amongst all action's values
delta=max(delta,abs(values[(x,y)]-max_J)) #maximum value difference
extra_info['delta'].append(delta)
vg[(x,y)],pl[(x,y)]=max_J,possible_actions[temp_J.index(max_J)] #updating value, policy
values,policy=vg.copy(),pl.copy() #.copy() to avoid call by reference
extra_info['brown_in'].append(values[state_brown_in])
extra_info['brown_out'].append(values[state_brown_out])
extra_info['grey_in'].append(values[state_grey_in])
# End code
# Do not change the number of output values
return {"Values": values, "Policy": policy}, extra_info
env = GridEnv_HW2(**base_kwargs)
res, extra_info = value_iteration(env, 0.7)
# The rounding off is just for making print statement cleaner
print(np.flipud(np.round(res['Values'], decimals=2)))
print(np.flipud(res['Policy']))
Task 2.2 - Policy Iteration¶
Run policy iteration on the environment and generate the policy and expected reward
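For reference, the implementation below alternates two steps until the policy stops changing: policy evaluation (in-place sweeps of the current policy's Bellman equation until the largest change falls below 1e-8) and greedy policy improvement:
$$\text{Evaluation: } J^{\pi}(s) \leftarrow \sum_{s' \in S} P_{ss'}(\pi(s))\,\big[r(s,\pi(s),s') + \gamma J^{\pi}(s')\big] \qquad \text{Improvement: } \pi(s) \leftarrow \arg\max_{a \in A} \sum_{s' \in S} P_{ss'}(a)\,\big[r(s,a,s') + \gamma J^{\pi}(s')\big]$$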
def policy_iteration(env, gamma):
# Initial Values
values = np.zeros((10, 10))
# Initial policy
policy = np.empty((10, 10), object)
policy[:] = 'N' # Make all the policy values as 'N'
# Begin code here
extra_info = {} #Put your extra information needed for plots etc in this dictionary
time_local,time_global,unequal_policy=0,0,0
extra_info['time_local'],extra_info['time_global'],extra_info['delta'],extra_info['unequal_policy']=[],[],[],[]
done=0
brown_in_loc = np.where(env.grid == 'brown_in')
brown_out_loc = np.where(env.grid == 'brown_out')
grey_in_loc = np.where(env.grid == 'grey_in')
state_brown_in = brown_in_loc[0][0], brown_in_loc[1][0]
state_brown_out = brown_out_loc[0][0], brown_out_loc[1][0]
state_grey_in = grey_in_loc[0][0], grey_in_loc[1][0]
extra_info['brown_in']=[]
extra_info['brown_out']=[]
extra_info['grey_in']=[]
extra_info['values']=[]
possible_actions=list(env.actions.keys())
while (done==0):
time_global+=1
delta=1e-8
while (delta>=1e-8): #repeat until delta reduces less than 1e-8
delta=0.0
time_local+=1
for x in range(env.grid_size[0]): #go over all x-coordinates
for y in range(env.grid_size[1]): #go over all y-coordinates
j=values[(x,y)]
action=policy[(x,y)]
nextstates_probs_rews=env.get_transition_probabilites_and_reward((x,y),action) #obtain possible next states, probabilities, rewards from environment
sum=0.0
for npr in nextstates_probs_rews: #go over all possible next states
sum+=npr[1]*(npr[2]+gamma*values[(npr[0][0],npr[0][1])]) #approach optimal value incrementally by using operator T
values[(x,y)]=sum
delta=max(delta,abs(values[(x,y)]-j)) #maximum value difference
extra_info['delta'].append(delta)
extra_info['time_local'].append(time_local)
time_local=0
done=1
for x in range(env.grid_size[0]): #go over all x-coordinates
for y in range(env.grid_size[1]): #go over all y-coordinates
b=policy[(x,y)]
temp_J=[]
for a in possible_actions: #go over all possible actions
nextstates_probs_rews=env.get_transition_probabilites_and_reward((x,y),a) #obtain possible next states, probabilities, rewards from environment
sum=0.0
for npr in nextstates_probs_rews: #go over all possible next states
sum+=npr[1]*(npr[2]+gamma*values[(npr[0][0],npr[0][1])]) #approach optimal value incrementally by using operator T
temp_J.append(sum)
policy[(x,y)]=possible_actions[np.argmax(np.array(temp_J))] #find maximum value amongst all action's values
if b!=policy[(x,y)]:
done=0
if len(extra_info['unequal_policy'])<time_global:
extra_info['unequal_policy'].append(0)
extra_info['unequal_policy'][time_global-1]+=1
extra_info['time_global'].append(time_global)
extra_info['brown_in'].append(values[state_brown_in])
extra_info['brown_out'].append(values[state_brown_out])
extra_info['grey_in'].append(values[state_grey_in])
extra_info['values'].append(values.copy())
# End code
# Do not change the number of output values
return {"Values": values, "Policy": policy}, extra_info
env = GridEnv_HW2(**base_kwargs)
res, extra_info = policy_iteration(env, 0.7)
# The rounding off is just for making print statement cleaner
print(np.flipud(np.round(res['Values'], decimals=2)))
print(np.flipud(res['Policy']))
'''
###used for plotting value convergence curves for value, policy iteration for 3 different states###
plt.plot(extra_info1['brown_in'],label='value iteration brown-in') # extra_info1 is from value iteration
plt.plot(extra_info2['brown_in'],label='policy iteration brown-in') # extra_info2 is from policy iteration
plt.title('brown in-value vs iterations')
plt.plot(extra_info1['brown_out'],label='value iteration brown-out')
plt.plot(extra_info2['brown_out'],label='policy iteration brown-out')
plt.title('brown out-value vs iterations')
plt.plot(extra_info1['grey_in'],label='value iteration grey-in')
plt.plot(extra_info2['grey_in'],label='policy iteration grey-in')
plt.legend(loc="lower right")
plt.xlabel('iterations')
plt.ylabel('values')
plt.title('values vs iterations')
'''
'''
###used for plotting rms error over iterations for policy iteration###
opt_val=res['Values']
size_s=opt_val.size
rms_pi=[]
for v in extra_info['values']:
rms_pi.append(np.sqrt(np.sum(np.square(opt_val-v))/size_s))
plt.plot(rms_pi,label='policy iteration')
plt.legend(loc="upper right")
plt.xlabel('iterations')
plt.ylabel('rms error of values')
plt.title('rms error of values vs iterations')
'''
Task 2.3 - TD Lambda¶
Use the heuristic policy and implement TD lambda to find values on the gridworld
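For reference, the implementation below uses accumulating eligibility traces with $\alpha = 0.5$ and $\gamma = 0.7$: after each transition $(s, r, s')$ sampled under the heuristic policy, it computes the TD error, increments the trace of the visited state, and then updates the value and decays the trace of every state:
$$\delta = r + \gamma J(s') - J(s), \qquad e(s) \leftarrow e(s) + 1, \qquad J(x) \leftarrow J(x) + \alpha\,\delta\,e(x), \quad e(x) \leftarrow \gamma\lambda\,e(x) \ \ \text{for all states } x$$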
# The policy mentioned in the pdf to be used for TD lambda, do not modify this
def heuristic_policy(env, state):
goal = env.goal_location
dx = goal[0] - state[0]
dy = goal[1] - state[1]
if abs(dx) >= abs(dy):
direction = (np.sign(dx), 0)
else:
direction = (0, np.sign(dy))
for action, dir_val in env.actions.items():
if dir_val == direction:
target_action = action
break
return target_action
def td_lambda(env, lamda, seeds):
alpha = 0.5
gamma = 0.7
N = len(seeds)
# Usage of input_policy
# heuristic_policy(env, state) -> action
example_action = heuristic_policy(env, (1,2)) # Returns 'N' if goal is (9,9)
# Example of env.step
# env.step(state, action) -> Returns next_state, reward, is_terminal
# Initial values
values = np.zeros((10, 10))
es = np.zeros((10,10))
for episode_idx in range(N):
# Do not change this else the results will not match due to environment stochasticity
np.random.seed(seeds[episode_idx])
grey_in_loc = np.where(env.grid == 'grey_in')
state = grey_in_loc[0][0], grey_in_loc[1][0]
done = False
extra_info = {'delta':[],'values':[]} #Put your extra information needed for plots etc in this dictionary
while not done:
action = heuristic_policy(env, state)
ns, rew, is_terminal = env.step(state, action)
# env.step is already taken inside the loop for you,
# Don't use env.step anywhere else in your code
# Begin code here
delta = rew+gamma*values[ns]-values[state]
extra_info['delta'].append(delta)
es[state]+=1
for x in range(env.grid_size[0]): #go over all x-coordinates
for y in range(env.grid_size[1]): #go over all y-coordinates
values[(x,y)]+=alpha*delta*es[(x,y)]
es[(x,y)]=gamma*lamda*es[(x,y)]
state=ns
if is_terminal: #env._grid_state(state) == 'goal'
done = True
possible_actions=list(env.actions.keys())
policy,policy_index = np.empty((10, 10), object), np.empty((10, 10), object)
for x in range(env.grid_size[0]): #go over all x-coordinates
for y in range(env.grid_size[1]): #go over all y-coordinates
temp_J=[]
for a in possible_actions: #go over all possible actions
nextstates_probs_rews=env.get_transition_probabilites_and_reward((x,y),a) #obtain possible next states, probabilities, rewards from environment
sum=0.0
for npr in nextstates_probs_rews: #go over all possible next states
sum+=npr[1]*(npr[2]+gamma*values[(npr[0][0],npr[0][1])]) #approach optimal value incrementally by using operator T
temp_J.append(sum)
policy[(x,y)]=possible_actions[np.argmax(np.array(temp_J))] #find maximum value amongst all action's values
policy_index[(x,y)]=np.argmax(np.array(temp_J))
extra_info['policy']=policy
extra_info['policy_index']=policy_index
extra_info['values'].append(values.copy())
# End code
# Do not change the number of output values
return {"Values": values}, extra_info
env = GridEnv_HW2(**base_kwargs)
res, extra_info = td_lambda(env, lamda=1, seeds=np.arange(1000))
# The rounding off is just for making print statement cleaner
print(np.flipud(np.round(res['Values'], decimals=2)))
'''
###used for plotting rms error over different lambda for TD Lambda###
opt_val=res['Values']
size_s=opt_val.size
rms_pi=[]
for v in extra_info['values']:
rms_pi.append(np.sqrt(np.sum(np.square(opt_val-v))/size_s))
plt.plot(rms_pi,label='td lambda')
plt.legend(loc="upper right")
plt.xlabel('iterations')
plt.ylabel('rms error of values')
plt.title('rms error of values vs iterations')
'''
Task 2.4 - TD Lambda for multiple values of $\lambda$¶
Ideally this code should run as is
# This cell is only for your subjective evaluation results, display the results as asked in the pdf
# You can change it as you require, this code should run TD lambda by default for different values of lambda
lamda_values = np.arange(0, 100+5, 5)/100
td_lamda_results = {}
extra_info = {}
for lamda_idx in range(len(lamda_values)):
lamda=lamda_values[lamda_idx]
env = GridEnv_HW2(**base_kwargs)
td_lamda_results[lamda], extra_info[lamda] = td_lambda(env, lamda,
seeds=np.arange(1000))
print('lambda: ',lamda)
print('value grid result: ',td_lamda_results[lamda]['Values'])
if lamda:
print('sum of absolute difference between two consecutive value grids: ',sum(sum(abs(td_lamda_results[lamda_values[lamda_idx-1]]['Values']-td_lamda_results[lamda]['Values']))))
ax = sns.heatmap(td_lamda_results[lamda]['Values'], linewidth=0.5)
plt.title('value grid for lambda: %0.2f'%(lamda))
plt.show()
print('policy grid result: ',extra_info[lamda]['policy'])
ax = sns.heatmap(np.array(extra_info[lamda]['policy_index'],dtype=np.int32), linewidth=0.5)
plt.title('policy grid for lambda: %0.2f'%(lamda))
plt.show()
Generate Results ✅¶
def get_results(kwargs):
gridenv = GridEnv_HW2(**kwargs)
policy_iteration_results = policy_iteration(gridenv, 0.7)[0]
value_iteration_results = value_iteration(gridenv, 0.7)[0]
td_lambda_results = td_lambda(env, 0.5, np.arange(1000))[0]
final_results = {}
final_results["policy_iteration"] = policy_iteration_results
final_results["value_iteration"] = value_iteration_results
final_results["td_lambda"] = td_lambda_results
return final_results
# Do not edit this cell, generate results with it as is
if not os.path.exists(AICROWD_RESULTS_DIR):
os.mkdir(AICROWD_RESULTS_DIR)
for params_file in os.listdir(DATASET_DIR):
kwargs = np.load(os.path.join(DATASET_DIR, params_file), allow_pickle=True).item()
results = get_results(kwargs)
idx = params_file.split('_')[-1][:-4]
np.save(os.path.join(AICROWD_RESULTS_DIR, 'results_' + idx), results)
Check your score on the public data¶
This score is not your final score, and it does not use the marks weightage. It is only for your reference of how arrays are matched and with what tolerance.
# Check your score on the given test cases (There are more private test cases not provided)
target_folder = 'targets'
result_folder = AICROWD_RESULTS_DIR
def check_algo_match(results, targets):
if 'Policy' in results:
policy_match = results['Policy'] == targets['Policy']
else:
policy_match = True
# Reference https://numpy.org/doc/stable/reference/generated/numpy.allclose.html
rewards_match = np.allclose(results['Values'], targets['Values'], rtol=3)
equal = rewards_match and policy_match
return equal
def check_score(target_folder, result_folder):
match = []
for out_file in os.listdir(result_folder):
res_file = os.path.join(result_folder, out_file)
results = np.load(res_file, allow_pickle=True).item()
idx = out_file.split('_')[-1][:-4] # Extract the file number
target_file = os.path.join(target_folder, f"targets_{idx}.npy")
targets = np.load(target_file, allow_pickle=True).item()
algo_match = []
for k in targets:
algo_results = results[k]
algo_targets = targets[k]
algo_match.append(check_algo_match(algo_results, algo_targets))
match.append(np.mean(algo_match))
return np.mean(match)
if os.path.exists(target_folder):
print("Shared data Score (normalized to 1):", check_score(target_folder, result_folder))
Display Results of TD lambda¶
Display Results of TD lambda with lambda values from 0 to 1 with steps of 0.05:
Optimal policies of TD lambda (for different $\lambda$) can be calculated from the value grid using the expression: $\pi(s) = \arg\max_{a \in A} \sum_{s' \in S} P_{ss'}(a)\,\big[r(s,a,s') + \gamma J(s')\big]$
The policy grids for different values of lambda are plotted as index grids, where each index maps to an action in the order of env.actions, i.e. 0='N', 1='E', 2='S', 3='W'. For example, the policy grid result for lambda=0:
[[0 3 0 0 0 2 0 0 0 3]
 [2 0 0 0 0 1 1 0 0 3]
 [1 1 0 0 0 1 1 0 0 3]
 [1 1 0 0 0 0 1 0 0 3]
 [1 1 0 1 1 1 1 1 0 0]
 [1 1 0 0 0 1 1 1 0 0]
 [1 1 1 0 0 1 0 0 1 0]
 [1 1 1 1 0 1 1 0 0 0]
 [0 2 2 1 1 1 1 1 0 0]
 [0 1 2 2 2 2 2 0 1 0]]
which is the same as the policy grid result for lambda=0:
[['N' 'W' 'N' 'N' 'N' 'S' 'N' 'N' 'N' 'W']
 ['S' 'N' 'N' 'N' 'N' 'E' 'E' 'N' 'N' 'W']
 ['E' 'E' 'N' 'N' 'N' 'E' 'E' 'N' 'N' 'W']
 ['E' 'E' 'N' 'N' 'N' 'N' 'E' 'N' 'N' 'W']
 ['E' 'E' 'N' 'E' 'E' 'E' 'E' 'E' 'N' 'N']
 ['E' 'E' 'N' 'N' 'N' 'E' 'E' 'E' 'N' 'N']
 ['E' 'E' 'E' 'N' 'N' 'E' 'N' 'N' 'E' 'N']
 ['E' 'E' 'E' 'E' 'N' 'E' 'E' 'N' 'N' 'N']
 ['N' 'S' 'S' 'E' 'E' 'E' 'E' 'E' 'N' 'N']
 ['N' 'E' 'S' 'S' 'S' 'S' 'S' 'N' 'E' 'N']]
- Results of value grid for different values of lambda plotted:
Subjective questions¶
2.a Value Iteration vs Policy Iteration¶
- Compare value iteration and policy iteration for states Brown in, Brown Out, Grey out and Grey In
A: For all three states, policy iteration and value iteration converge to almost the same upper bound for J, as can be seen from the graph.
- Which one converges faster and why
A: Policy iteration (PI) converges faster (in roughly 5 iterations) than value iteration (VI) for all three states, as can be inferred from the graph. In general VI is expected to converge quickly since its update is a contraction, but the comparison here is biased towards PI by how iterations are counted: for PI we count only the outer loop (policy improvement steps), whereas for VI we count every sweep, and the policy-evaluation sweeps nested inside each PI iteration are not counted. PI also uses immediate (in-place) value updates, updating J(s) as soon as its new value is computed, instead of the bulk updates in VI (computing all new values H(s) first and then setting J(s) = H(s)), although this by itself does not change the time or iteration count much. In addition, the brown-in and grey-in states have almost deterministic transitions (very few possible next states), so finding their policy is easy in this setting. Hence PI may have converged faster than VI here.
2.b How changing $\lambda$ affects TD Lambda¶
A: From observing the "display the results" plots of TD Lambda for different lambda, we find that for lambda close to 1, we have each state's value dependent on all it's neighbours (there is a spread, gradual color variation) wheras for lambda close to 0, values change abruptly sometimes (as compared to higher lambda's; observe states close to right bottom). Policy changes occur mostly in top right (for different lambda) and is less variant (more grouped around a state) but might be biased for less lambda and more variant (might be a better policy for sharp turns) policy for higher values of lambda.
As can be seen from the above graph, all TD Lambda's RMS error converges to 0 in almost the same number of iterations. But for lambda closer to 0 (TD(0) like case), the (average) RMS-error is higher as compared to other TD's which have lambda closer to 1. When lambda is closer to 1 (MCPE like case), the wiggle (or deviations from mean over all iterations) is higher as compared to lesser values of lambda. This arises because of the fact that TD(0) is a biased estimate whereas TD(1) is an unbiased but high variance estimate.
2.c Policy iteration error curve¶
Plot error curve of $J_i$ vs iteration $i$ for policy iteration
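The quantity plotted is the root-mean-square error between the value grid at iteration $i$ and the final converged values $J^*$ (this is what the commented-out plotting block above computes):
$$\text{RMS}_i = \sqrt{\frac{1}{|S|}\sum_{s \in S}\big(J^*(s) - J_i(s)\big)^2}$$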
As can be seen from the curve, the RMS error of $J_i$ under policy iteration quickly converges to 0.
2.d TD Lambda error curve¶
Plot error curve of $J_i$ vs iteration $i$ for TD Lambda for $\lambda = [0, 0.25, 0.5, 0.75, 1]$
The explanation for this graph is given in the answer to part 2.b: the RMS error of TD Lambda converges to 0 in almost the same number of iterations for every lambda, but for lambda closer to 0 (the TD(0)-like case) the average RMS error is higher, while for lambda closer to 1 (the Monte Carlo-like case) the fluctuation across iterations is larger, reflecting the fact that TD(0) is a biased estimate whereas TD(1) is unbiased but has high variance.
Submit to AIcrowd 🚀¶
!DATASET_PATH=$AICROWD_DATASET_PATH aicrowd notebook submit --no-verify -c iit-m-rl-assignment-2-gridworld -a assets