What is the difference between tf.nn.sparse_softmax_cross_entropy_with_logits and tf.nn.softmax_cross_entropy_with_logits? I am learning about them. Can someone share your knowledge and experience?
It’s convenient to have two separate functions that yield the same result.

The distinction is straightforward:

- For sparse_softmax_cross_entropy_with_logits, labels must be integers; each label is an int in the range [0, num_classes-1].
- For softmax_cross_entropy_with_logits, labels are the one-hot form of the labels used with sparse_softmax_cross_entropy_with_logits, i.e. one row of length num_classes per example.

Another minor distinction is that you can use -1 as a label with sparse_softmax_cross_entropy_with_logits to get a loss of 0 on that label. A short sketch below illustrates the equivalence of the two functions.
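A minimal sketch of the equivalence (assuming TensorFlow 2.x in eager mode): the same logits scored against integer labels and against their one-hot equivalents produce identical per-example losses.

```python
import tensorflow as tf

# Logits for a batch of 2 examples over 3 classes.
logits = tf.constant([[2.0, 0.5, -1.0],
                      [0.1, 1.5,  0.3]])           # shape [batch_size, num_classes]

# Integer labels for the sparse variant, and their one-hot form for the dense variant.
sparse_labels = tf.constant([0, 1])                 # shape [batch_size], ints in [0, num_classes-1]
onehot_labels = tf.one_hot(sparse_labels, depth=3)  # shape [batch_size, num_classes]

sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits)
dense_loss = tf.nn.softmax_cross_entropy_with_logits(
    labels=onehot_labels, logits=logits)

# Both tensors hold the same per-example cross-entropy values.
print(sparse_loss.numpy())
print(dense_loss.numpy())
```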
To add to the accepted answer, I’d like to add two things that you can also find in the TF documentation.
First: for tf.nn.softmax_cross_entropy_with_logits, the documentation notes that while the classes are mutually exclusive, their probabilities need not be; all that is required is that each row of labels is a valid probability distribution.

Second: for tf.nn.sparse_softmax_cross_entropy_with_logits, the documentation notes that the probability of a given label is considered exclusive, that is, soft classes are not allowed, and the labels vector must provide a single specific index for the true class for each row of logits.
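A minimal sketch of that distinction (assuming TensorFlow 2.x): softmax_cross_entropy_with_logits accepts soft targets, i.e. any row that forms a valid probability distribution, while the sparse variant requires exactly one class index per example.

```python
import tensorflow as tf

logits = tf.constant([[2.0, 0.5, -1.0]])      # shape [1, 3]

# Soft targets: probability mass spread over several classes (row sums to 1.0).
soft_labels = tf.constant([[0.7, 0.2, 0.1]])
soft_loss = tf.nn.softmax_cross_entropy_with_logits(labels=soft_labels, logits=logits)

# The sparse variant cannot express that target; it takes a single exclusive index.
hard_label = tf.constant([0])                  # shape [1]
hard_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=hard_label, logits=logits)

print(soft_loss.numpy(), hard_loss.numpy())
```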