Instructions to use cccczshao/CALM-Autoencoder with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use cccczshao/CALM-Autoencoder with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="cccczshao/CALM-Autoencoder")

# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("cccczshao/CALM-Autoencoder", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use cccczshao/CALM-Autoencoder with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "cccczshao/CALM-Autoencoder"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "cccczshao/CALM-Autoencoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/cccczshao/CALM-Autoencoder
```
- SGLang
How to use cccczshao/CALM-Autoencoder with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "cccczshao/CALM-Autoencoder" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "cccczshao/CALM-Autoencoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "cccczshao/CALM-Autoencoder" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "cccczshao/CALM-Autoencoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use cccczshao/CALM-Autoencoder with Docker Model Runner:
```shell
docker model run hf.co/cccczshao/CALM-Autoencoder
```
Continuous Autoregressive Language Models
Model Description
Modern Large Language Models (LLMs) are constrained by a fundamental bottleneck: they generate text one token at a time. CALM (Continuous Autoregressive Language Models) confronts this challenge with a paradigm shift in language modeling. Instead of predicting the next discrete token, CALM learns to predict a single continuous vector that represents an entire chunk of K tokens.
This is achieved through a two-stage process:
- A high-fidelity autoencoder learns to compress K tokens into a single vector and reconstruct them with near-perfect accuracy.
- A continuous-domain language model then performs autoregressive prediction in this vector space.
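The two-stage idea can be illustrated with a toy sketch. This is not the real model (the actual autoencoder and predictor are learned neural networks; all names here are made up): a lossless "encoder" packs K token ids into one vector, so the autoregressive loop runs over vectors instead of tokens, cutting the number of steps by a factor of K.

```python
# Toy illustration of the two-stage CALM setup; not the real implementation.
K = 4  # chunk size ("semantic bandwidth")

def encode(chunk):
    # Stage 1 (toy): compress K tokens into a single "vector".
    return tuple(chunk)

def decode(vector):
    # Reconstruct the K tokens from that vector.
    return list(vector)

def chunk_tokens(tokens, k=K):
    return [tokens[i:i + k] for i in range(0, len(tokens), k)]

tokens = list(range(16))                      # 16 tokens
vectors = [encode(c) for c in chunk_tokens(tokens)]

# Stage 2 (toy): autoregression runs over vectors, not tokens,
# so 16 token-level steps become 4 vector-level steps.
steps_token_level = len(tokens)               # 16
steps_vector_level = len(vectors)             # 4

# Round-trip reconstruction is exact in this toy setup.
reconstructed = [t for v in vectors for t in decode(v)]
assert reconstructed == tokens
print(steps_token_level, steps_vector_level)  # 16 4
```

The real autoencoder compresses lossily but with near-perfect reconstruction accuracy; the step-count arithmetic is the same.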
Key Features
🚀 Ultra-Efficient by Design: Dramatically improves training and inference efficiency by reducing the number of autoregressive steps by a factor of K.
💡 A New Scaling Axis: Introduces a new scaling dimension for LLMs—semantic bandwidth (K). Instead of just scaling parameters and data, you can now scale the amount of information processed in a single step.
🛠️ A Comprehensive Likelihood-Free Toolkit: Operating in a continuous domain requires new tools. This repository provides the full suite of algorithms that make CALM possible:
- A Robust Autoencoder to learn high-fidelity continuous representations of token chunks.
- Energy-Based Training, a principled and likelihood-free method for generative modeling.
- BrierLM, a new metric for calibrated, likelihood-free evaluation of language models.
- Temperature Sampling for controlled, high-quality text generation using only a black-box sampler.
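To give a flavor of likelihood-free evaluation (this mirrors the idea behind BrierLM, not its exact definition; `probs` and all names below are hypothetical): the Brier score BS(p, y) = Σᵢ (pᵢ − 1[i = y])² = Σᵢ pᵢ² − 2·p_y + 1 can be estimated without access to any probabilities, using only pairs of independent samples from a black-box sampler, since E[1(x₁ = x₂)] = Σᵢ pᵢ² and E[1(x = y)] = p_y.

```python
import random

random.seed(0)

# Hypothetical next-token distribution; in practice the sampler is a black box.
probs = {"a": 0.6, "b": 0.3, "c": 0.1}
target = "a"

def sample():
    return random.choices(list(probs), weights=list(probs.values()))[0]

def brier_exact(p, y):
    # Ground truth, computable here only because we know the probabilities.
    return sum((p[i] - (i == y)) ** 2 for i in p)

def brier_estimate(y, n=200_000):
    # Unbiased, likelihood-free estimator: two independent samples per draw.
    total = 0.0
    for _ in range(n):
        x1, x2 = sample(), sample()
        total += (x1 == x2) - (x1 == y) - (x2 == y) + 1
    return total / n

exact = brier_exact(probs, target)   # 0.26
est = brier_estimate(target)
print(exact, round(est, 3))
```

The Monte Carlo estimate converges to the exact score as the number of sample pairs grows, which is what makes sample-only evaluation of a continuous-domain model feasible.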
How to use
See our GitHub README, where we provide scripts for training and evaluation.
Contact
If you have any questions, feel free to submit an issue or contact chenzeshao@tencent.com.