Models
Last updated
We are focused on developing encoder-based models. We find encoder models to be extremely efficient for information extraction (IE) tasks, where their contextual understanding and flexibility make them a strong choice. We've got it covered in our .
Our models were trained to work in zero-shot and few-shot learning settings, so they can be efficiently fine-tuned with minimal data. Few-shot learning refers to the ability of ML models to learn from a minimal set of examples. Unlike traditional ML models, which require large datasets to learn effectively, few-shot learning models are designed to make accurate predictions from only a few examples. Read more in our . Each model can perform a wide range of information extraction tasks, and depending on your final use case, a different model may be a better fit.
A text classification model designed for multi-label text classification tasks, with output scoring. It works in zero-shot and few-shot learning settings and can be used for:
Text classification
Reranking of search results
Named-entity classification
Relation classification
Entity linking
Q&A
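To make the "multi-label with output scoring" behavior concrete, here is a minimal sketch of the classification interface. The similarity function is a toy bag-of-words stand-in for the model's learned encoder, and the `classify` helper and its signature are illustrative, not the library's actual API; the key point is that every candidate label receives an independent score rather than competing in a single softmax.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; the real model uses a learned encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify(text, labels):
    # Multi-label scoring: each label gets its own independent score in [0, 1],
    # so several labels can apply to the same text at once.
    doc = embed(text)
    return {label: round(cosine(doc, embed(label)), 3) for label in labels}

scores = classify(
    "The new phone has a great camera and long battery life",
    ["phone review", "sports news", "battery life"],
)
```

Because scores are per-label, the same scoring loop also covers reranking (sort search results by score against a query-derived label) and named-entity or relation classification (score an extracted span against a set of candidate classes).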
An encoder-based token classification model capable of handling a wide range of popular IE tasks. The model is prompt-based and is configured entirely through prompts. Supported IE tasks:
Named-entity recognition (NER)
Relation extraction
Summarization
Q&A
Text cleaning
Coreference resolution
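A token classification model labels each token, so the final step for tasks like NER is decoding those per-token labels into entity spans. The sketch below assumes a standard BIO tagging scheme; the tag values and the `decode_bio` helper are illustrative, not part of the actual model API.

```python
def decode_bio(tokens, tags):
    # Convert per-token BIO tags from a token classifier into labeled spans.
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:          # close the previous span, if any
                spans.append((" ".join(tokens[start:i]), label))
            start, label = i, tag[2:]      # open a new span
        elif tag.startswith("I-") and label == tag[2:]:
            continue                       # extend the current span
        else:
            if start is not None:          # "O" or mismatched tag ends the span
                spans.append((" ".join(tokens[start:i]), label))
            start, label = None, None
    if start is not None:                  # flush a span that ends the sequence
        spans.append((" ".join(tokens[start:]), label))
    return spans

tokens = ["Acme", "Corp", "hired", "Jane", "Doe", "in", "Paris"]
tags   = ["B-ORG", "I-ORG", "O", "B-PER", "I-PER", "O", "B-LOC"]
spans = decode_bio(tokens, tags)
# spans == [("Acme Corp", "ORG"), ("Jane Doe", "PER"), ("Paris", "LOC")]
```

The same span-decoding step underlies the other supported tasks: relation extraction and Q&A return spans tagged with relation or answer labels, and coreference resolution links spans that decode to the same entity.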