How to use zeroMN/SHMT with Transformers:

# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("zeroMN/SHMT", torch_dtype="auto")

SHMT (Evolutionary Multi-Modal Model) is a multimodal transformer designed to handle a variety of tasks, including vision and audio processing. It is built on top of the adapter-transformers and transformers libraries and is intended as a versatile base model for both direct use and fine-tuning.
from ucimlrepo import fetch_ucirepo

# fetch dataset
breast_cancer_wisconsin_original = fetch_ucirepo(id=15)

# data (as pandas dataframes)
X = breast_cancer_wisconsin_original.data.features
y = breast_cancer_wisconsin_original.data.targets

# metadata
print(breast_cancer_wisconsin_original.metadata)

# variable information
print(breast_cancer_wisconsin_original.variables)
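The feature frame returned for this dataset contains missing values (the Bare_nuclei column), and the target uses 2/4 class labels (2 = benign, 4 = malignant). A minimal cleaning sketch, using a tiny synthetic sample in place of the real fetch so it runs offline; the column names follow the UCI dataset description:

```python
import pandas as pd

# Synthetic mini-sample standing in for the frames returned by
# fetch_ucirepo(id=15); column names follow the UCI description.
X = pd.DataFrame({
    "Clump_thickness": [5, 3, 8],
    "Bare_nuclei": [1.0, None, 10.0],  # this column has missing values
})
y = pd.DataFrame({"Class": [2, 2, 4]})  # 2 = benign, 4 = malignant

# Drop rows with missing features, keeping targets aligned by index.
mask = X.notna().all(axis=1)
X_clean, y_clean = X[mask], y[mask]

# Map the 2/4 labels to 0/1 for use with most classifiers.
y_binary = y_clean["Class"].map({2: 0, 4: 1})
print(len(X_clean), y_binary.tolist())  # → 2 [0, 1]
```

The same mask-then-map pattern applies unchanged to the full frames fetched above.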
git lfs install
git clone https://huggingface.co/zeroMN/SHMT.git
The model can be fine-tuned for specific tasks such as visual question answering (VQA), image captioning, and audio recognition.
The model is not suitable for tasks that require domain-specific expertise beyond its current capabilities, and the number of speech frames still needs to be tuned manually.
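Since the number of speech frames must be tuned manually, here is a minimal framing sketch showing how a waveform's frame count follows from the frame and hop lengths. The `frame_signal` helper and the default lengths (400/160 samples, i.e. 25 ms / 10 ms at 16 kHz) are assumptions for illustration, not values taken from this model card:

```python
import numpy as np

# Hypothetical helper: split a mono waveform into fixed-size overlapping frames.
# frame_length and hop_length are assumptions; tune them for your model.
def frame_signal(signal, frame_length=400, hop_length=160):
    n_frames = 1 + max(0, (len(signal) - frame_length) // hop_length)
    return np.stack([
        signal[i * hop_length : i * hop_length + frame_length]
        for i in range(n_frames)
    ])

frames = frame_signal(np.zeros(16000))  # 1 s of audio at 16 kHz
print(frames.shape)  # → (98, 400)
```

Changing either length changes the frame count, which is the knob the limitation above refers to.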
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations.
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="zeroMN/SHMT")