A grid search is an exhaustive search through a manually specified subset of the hyperparameter space. It requires a performance metric, such as cross-validation error or validation-set error, to evaluate candidate configurations. Typically, grid values are chosen on a logarithmic scale; for example, a learning rate might be drawn from the set {0.1, 0.01, 0.001, 0.0001}, and the number of hidden units from the set {50, 100, 200, 500, 1000, ...}. The computational cost of a grid search grows exponentially with the number of hyperparameters, so another popular technique is randomized search, which samples configurations a fixed number of times from the specified ranges. Randomized search tends to be more effective than exhaustive search in high-dimensional hyperparameter spaces, because some hyperparameters often have little effect on the loss, so sampling explores more distinct values of the hyperparameters that do matter.
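The two strategies can be sketched in a few lines of plain Python. This is a minimal illustration, not a production implementation: the search space, the `toy_validation_error` function, and the specific values are hypothetical stand-ins for a real model's cross-validation error.

```python
import itertools
import random

# Hypothetical search space on logarithmic scales, as described above.
param_grid = {
    "learning_rate": [0.1, 0.01, 0.001, 0.0001],
    "hidden_units": [50, 100, 200, 500, 1000],
}

def toy_validation_error(params):
    # Stand-in for a real cross-validation or validation-set error;
    # here the (made-up) optimum is learning_rate=0.01, hidden_units=200.
    return abs(params["learning_rate"] - 0.01) + abs(params["hidden_units"] - 200) / 1000

def grid_search(grid, score_fn):
    # Exhaustive search: evaluate every combination. The number of
    # combinations grows exponentially with the number of hyperparameters.
    keys = list(grid)
    candidates = [dict(zip(keys, vals)) for vals in itertools.product(*grid.values())]
    return min(candidates, key=score_fn)

def random_search(grid, score_fn, n_iter=5, seed=0):
    # Randomized search: sample a fixed number of configurations
    # from the specified ranges, regardless of grid size.
    rng = random.Random(seed)
    candidates = [{k: rng.choice(v) for k, v in grid.items()} for _ in range(n_iter)]
    return min(candidates, key=score_fn)

best = grid_search(param_grid, toy_validation_error)
print(best)  # grid search recovers learning_rate=0.01, hidden_units=200
```

Note that `grid_search` evaluates all 4 × 5 = 20 combinations, while `random_search` evaluates only `n_iter` of them; in real high-dimensional spaces that fixed budget is what makes randomized search attractive. Libraries such as scikit-learn expose the same ideas as `GridSearchCV` and `RandomizedSearchCV`.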