7. **Monte Carlo Methods**.
8. **Bootstrapped Networks**. This algorithm trains **q** neural networks
simultaneously and in parallel, based on different datasets D<sub>1</sub>, ..., D<sub>q</sub>. Those datasets are collected by adding each new
datapoint (X<sub>t</sub>, a<sub>t</sub>, r<sub>t</sub>) to each dataset *D<sub>i</sub>* independently and with
probability p ∈ (0, 1]. Therefore, the main hyperparameters of the
algorithm are **(q, p)**. In order to choose an action for a new context,
one of the **q** networks is first selected with uniform probability (i.e.,
*1/q*). Then, the best action according to the *selected* network is