AWQ
concept · Quantization Method
Overview
Developed by: MIT Han Lab
License: MIT License
Open source: Yes
Use case: 4-bit quantization of large language models
Knowledge graph stats
Claims: 18
Avg confidence: 93%
Avg freshness: 100%
Last updated: 2 days ago
Trust distribution: 100% unverified


Activation-aware Weight Quantization: a method for efficient 4-bit quantization of LLMs that preserves the most important weight channels, identified from activation statistics.
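The core idea can be illustrated with a toy sketch (assumptions: random data, simple round-to-nearest 4-bit quantization, a fixed scaling exponent of 0.5; the published method additionally searches the exponent per layer and folds the scales into the preceding operation rather than dividing activations at runtime):

```python
# Toy sketch of activation-aware weight scaling before 4-bit quantization.
# Not the published AWQ algorithm; a minimal demonstration of the idea.
import numpy as np

rng = np.random.default_rng(0)

def quantize_4bit(w):
    """Symmetric round-to-nearest 4-bit quantization, per output channel."""
    scale = np.abs(w).max(axis=0, keepdims=True) / 7  # map max |w| to int4 +/-7
    scale = np.where(scale == 0, 1.0, scale)
    return np.clip(np.round(w / scale), -8, 7) * scale

# Toy linear layer: W (in_features x out_features), calibration activations X.
W = rng.normal(size=(64, 32))
X = rng.normal(size=(256, 64))
X[:, :4] *= 20.0  # a few "salient" input channels carry much larger activations

# Activation-aware scaling: scale up weight rows that see large activations
# (so they survive quantization), and divide the activations accordingly.
s = np.abs(X).mean(axis=0) ** 0.5  # exponent 0.5 is an illustrative choice
s = s / s.mean()                   # normalize to keep overall magnitudes stable

ref = X @ W
plain = X @ quantize_4bit(W)                    # plain round-to-nearest
aware = (X / s) @ quantize_4bit(W * s[:, None]) # activation-aware scaling

err_plain = np.mean((ref - plain) ** 2)
err_aware = np.mean((ref - aware) ** 2)
print(f"MSE plain RTN: {err_plain:.4f}  activation-aware: {err_aware:.4f}")
```

With these toy settings the activation-aware variant gives a noticeably lower output error, because quantization noise on the salient input channels is reduced where the activations are largest.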


published year

Value | Trust | Confidence | Freshness | Sources
2023 | Unverified | High | Fresh | 1

full name

Value | Trust | Confidence | Freshness | Sources
Activation-aware Weight Quantization | Unverified | High | Fresh | 1

quantization bits

Value | Trust | Confidence | Freshness | Sources
4-bit | Unverified | High | Fresh | 1
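As a generic illustration of what 4-bit storage means in practice (not specific to AWQ's kernels, which use their own packed layouts), two signed 4-bit values fit in one byte:

```python
# Generic int4 packing illustration: two signed 4-bit values per byte,
# low nibble first. Hypothetical helpers, not an AWQ or AutoAWQ API.
def pack_int4(vals):
    """Pack an even-length list of ints in [-8, 7] into bytes."""
    assert len(vals) % 2 == 0
    out = bytearray()
    for lo, hi in zip(vals[0::2], vals[1::2]):
        out.append((lo & 0xF) | ((hi & 0xF) << 4))
    return bytes(out)

def unpack_int4(data):
    """Inverse of pack_int4: recover the signed 4-bit values."""
    vals = []
    for b in data:
        for nib in (b & 0xF, b >> 4):
            vals.append(nib - 16 if nib >= 8 else nib)
    return vals

weights = [-8, -1, 0, 3, 7, -4]
packed = pack_int4(weights)
print(len(packed), "bytes for", len(weights), "weights")  # 3 bytes for 6 weights
```

This halving-per-step of storage (16-bit floats to 4-bit integers) is where the roughly 4x memory reduction of 4-bit quantization comes from, on top of the per-group scales that must also be stored.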

primary use case

Value | Trust | Confidence | Freshness | Sources
4-bit quantization of large language models | Unverified | High | Fresh | 1
Large Language Model Quantization | Unverified | High | Fresh | 1

developed by

Value | Trust | Confidence | Freshness | Sources
MIT Han Lab | Unverified | High | Fresh | 1
MIT | Unverified | High | Fresh | 1

open source

Value | Trust | Confidence | Freshness | Sources
true | Unverified | High | Fresh | 1

implemented by

Value | Trust | Confidence | Freshness | Sources
AutoAWQ | Unverified | High | Fresh | 1
TinyChat | Unverified | Moderate | Fresh | 1

supports model

Value | Trust | Confidence | Freshness | Sources
LLaMA | Unverified | High | Fresh | 1
OPT | Unverified | High | Fresh | 1

alternative to

Value | Trust | Confidence | Freshness | Sources
GPTQ | Unverified | High | Fresh | 1
SmoothQuant | Unverified | High | Fresh | 1

license type

Value | Trust | Confidence | Freshness | Sources
MIT License | Unverified | High | Fresh | 1

integrates with

Value | Trust | Confidence | Freshness | Sources
PyTorch | Unverified | High | Fresh | 1
Hugging Face Transformers | Unverified | Moderate | Fresh | 1
vLLM | Unverified | Moderate | Fresh | 1

Claim count: 18 · Last updated: 4/8/2026