Classification performance analysis regarding margin of energy-based model

Myeong K. Kang, Kyo H. Park, Seong W. Kim, Min J. Kim, Sang C. Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

When training an energy-based model, it is important to set a good margin. However, it is almost impossible to learn a good margin using stochastic gradient descent (SGD), because the margin saturates to zero while the cost function is minimized. For that reason, the margin is usually set as a non-trainable scalar that linearly penalizes offending answers until they are more than a certain distance apart. Good performance depends on both the length of the margin and the dimension of the space to which the features are mapped. In this paper, we show that a large margin does not always lead to better performance and confirm that a well-tuned margin can achieve better results.
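As a sketch of the margin mechanism the abstract describes (the function name and the energy values are illustrative assumptions, not taken from the paper), a fixed-margin hinge loss for an energy-based model can be written as:

```python
def hinge_energy_loss(e_correct, e_incorrect, margin):
    """Margin-based hinge loss for an energy-based model.

    Penalizes the model linearly whenever the energy of the most
    offending incorrect answer is not at least `margin` below the
    energy of the correct answer. `margin` is a fixed (non-trainable)
    scalar, as described in the abstract.
    """
    return max(0.0, margin + e_correct - e_incorrect)

# If the margin were a trainable parameter minimized jointly with the
# loss, SGD could drive it toward zero (making the loss trivially
# zero), which is why it is kept as a fixed hyperparameter.
print(hinge_energy_loss(e_correct=1.0, e_incorrect=1.5, margin=1.0))  # 0.5
print(hinge_energy_loss(e_correct=1.0, e_incorrect=3.0, margin=1.0))  # 0.0
```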

Original language: English
Title of host publication: Proceedings of 2017 International Conference on Industrial Design Engineering, ICIDE 2017
Publisher: Association for Computing Machinery
Pages: 34-37
Number of pages: 4
ISBN (Electronic): 9781450348669
DOIs
State: Published - 29 Dec 2017
Event: 2017 International Conference on Industrial Design Engineering, ICIDE 2017 - Dubai, United Arab Emirates
Duration: 28 Dec 2017 - 31 Dec 2017

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 2017 International Conference on Industrial Design Engineering, ICIDE 2017
Country/Territory: United Arab Emirates
City: Dubai
Period: 28/12/17 - 31/12/17

Bibliographical note

Publisher Copyright:
© 2017 Association for Computing Machinery.

Keywords

  • Energy based models
  • Hinge loss
  • Margin
  • Negative log-likelihood loss
  • Triplet loss
