
Cross Entropy Loss



class liqfit.losses.CrossEntropyLoss(multi_target=False, weight=None, ignore_index=-100, reduction='mean', label_smoothing=0.0)

Parameters:

  • multi_target (bool, optional): Whether the labels are multi-target (one label per target position) or not. (Defaults to False).

  • weight (torch.Tensor, optional): Manual rescaling weight given to each class; if provided, it should be a 1-D tensor with one entry per class.

  • ignore_index (int, optional): Target value that is ignored and does not contribute to the loss. (Defaults to -100).

  • reduction (str, optional): Reduction method to apply to the loss: "none", "mean", or "sum". (Defaults to "mean").

  • label_smoothing (float, optional): A float in [0.0, 1.0] specifying the amount of smoothing when computing the loss, where 0.0 means no smoothing. The targets become a mixture of the original ground truth and a uniform distribution, as described in Rethinking the Inception Architecture for Computer Vision. (Defaults to 0.0).
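The weight, label_smoothing, and reduction parameters mirror those of PyTorch's cross_entropy, which this class wraps. A quick sketch of their effect using PyTorch directly (the logit and weight values here are arbitrary illustration values, not from LiqFit):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0]])  # one sample, three classes
target = torch.tensor([0])                 # the correct class is 0

plain = F.cross_entropy(logits, target)
smoothed = F.cross_entropy(logits, target, label_smoothing=0.1)
# Smoothing mixes the one-hot target with a uniform distribution,
# so a confident correct prediction is penalized slightly more:
assert smoothed.item() > plain.item()

# Per-class weights rescale each element's loss before reduction;
# doubling the weight of class 0 doubles this sample's loss:
w = torch.tensor([2.0, 1.0, 1.0])
weighted = F.cross_entropy(logits, target, weight=w, reduction='none')
unweighted = F.cross_entropy(logits, target, reduction='none')
assert torch.allclose(weighted, 2 * unweighted)
```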

Using CrossEntropyLoss

A thin wrapper around PyTorch's cross_entropy loss function that adds support for multi-target inputs.

from liqfit.losses import CrossEntropyLoss
import torch

x = torch.randn((1, 10, 20))      # logits: (batch, num_targets, num_classes)
y = torch.randint(0, 2, (1, 10))  # integer labels: (batch, num_targets)
ce_loss = CrossEntropyLoss(multi_target=True)
loss = ce_loss(x, y)  # no need to reshape the inputs manually
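For comparison, the multi-target case corresponds to flattening the batch and target dimensions yourself before calling PyTorch's cross_entropy directly. This is a sketch of the presumed equivalent behaviour, not LiqFit's exact internals:

```python
import torch
import torch.nn.functional as F

x = torch.randn((1, 10, 20))      # logits: (batch, num_targets, num_classes)
y = torch.randint(0, 2, (1, 10))  # integer labels: (batch, num_targets)

# Flatten to the 2-D shapes F.cross_entropy expects: (N, C) and (N,)
loss = F.cross_entropy(x.reshape(-1, x.shape[-1]), y.reshape(-1))
# With the default "mean" reduction the result is a scalar tensor
assert loss.ndim == 0
```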