Choosing a good regularization parameter lambda ( λ ) for a machine learning model is important to avoid both under-fitting (high bias) and over-fitting (high variance).
If lambda is too large, all theta ( θ ) values are penalized heavily and the hypothesis ( h ) tends toward zero. (High bias, under-fitting.)
If lambda is too small, regularization has almost no effect. (High variance, over-fitting.)
A cross-validation set can be used to select a good lambda: plot training error and validation error against lambda, then pick the lambda that minimizes the validation error.
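A minimal sketch of this selection procedure, assuming regularized (ridge) linear regression over hypothetical polynomial features; the data, degree, and candidate lambda values below are illustrative, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a noisy cubic, fit with degree-8 polynomial features
# (deliberately flexible, so it over-fits without regularization).
x = rng.uniform(-1, 1, 60)
y = x**3 + 0.1 * rng.standard_normal(60)

# Split into a training set and a cross-validation set.
x_train, y_train = x[:40], y[:40]
x_val, y_val = x[40:], y[40:]

def design(x, degree=8):
    # Polynomial feature matrix with an intercept column.
    return np.vander(x, degree + 1, increasing=True)

def ridge_fit(X, y, lam):
    # Closed-form regularized normal equation; theta_0 (intercept)
    # is conventionally not penalized.
    reg = lam * np.eye(X.shape[1])
    reg[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + reg, X.T @ y)

def mse(X, y, theta):
    return np.mean((X @ theta - y) ** 2)

lambdas = [0.0, 0.01, 0.1, 1.0, 10.0, 100.0]
X_train, X_val = design(x_train), design(x_val)

errors = []
for lam in lambdas:
    theta = ridge_fit(X_train, y_train, lam)
    errors.append((lam, mse(X_train, y_train, theta), mse(X_val, y_val, theta)))

# These (lambda, train error, val error) triples are what the error-vs-lambda
# plot would show; select the lambda with the lowest validation error.
best_lam = min(errors, key=lambda t: t[2])[0]
for lam, tr, va in errors:
    print(f"lambda={lam:7.2f}  train={tr:.4f}  val={va:.4f}")
print("best lambda:", best_lam)
```

Training error typically rises as lambda grows, while validation error is U-shaped: high at both extremes (over-fitting on the left, under-fitting on the right) with the best lambda near the bottom of the U.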