You are working on a classification task. After training your network on 20 samples, training converges, but the training loss remains remarkably high. You decide to train the same network on 10,000 examples to address this issue. Is your approach to fixing the problem correct?

A) Yes, increasing the amount of data will likely resolve the bias problem.

B) No, increasing the amount of data is unlikely to solve the bias problem.

C) Yes, increasing the amount of data will also reduce the bias of the model.

D) No, a better approach would be to keep the same model architecture and increase the learning rate.

1 Answer

No, increasing the amount of data is unlikely to solve the bias problem. A high training loss means the model cannot even fit the 20 samples it has already seen, so it is underfitting (high bias); adding data addresses variance (overfitting), not bias. A more effective approach is to increase the model's capacity, for example by adding more layers or learnable parameters. It is also possible that training converged to a poor local optimum, in which case training longer, using a better optimizer, or restarting from a different initialization could be more fruitful.
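
As a minimal sketch of this idea, assuming a PyTorch setup and a small synthetic dataset (both are illustrative assumptions, not part of the original question), the snippet below fits the same 20 samples with a low-capacity model and a higher-capacity one. The higher-capacity model should reach a much lower training loss, while giving either model more data would not fix the underfitting of the small one:

```python
# Illustrative sketch: raising model capacity, not dataset size,
# is what reduces a high *training* loss (high bias / underfitting).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny synthetic classification task: 20 samples, 5 features, 3 classes.
X = torch.randn(20, 5)
y = torch.randint(0, 3, (20,))

def train(model, epochs=500):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

# Low-capacity model: a single linear layer, likely to underfit.
small = nn.Linear(5, 3)

# Higher-capacity model: extra hidden layers add learnable parameters.
large = nn.Sequential(
    nn.Linear(5, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)

print("small-model training loss:", train(small))
print("large-model training loss:", train(large))
```

The exact architecture and hyperparameters here are arbitrary; the point is only that added capacity lets the network drive the training loss down on the same 20 examples.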
