Reference Models

As described in the Introduction and FAQ pages, the PolyBlocks compiler can compile functions written in frameworks like PyTorch, JAX, and TensorFlow. These functions can come from any domain: deep learning as well as other scientific, engineering, data analytics, and high-performance computing workloads. Good optimization can be expected as long as the functions are written using operators on dense tensors/matrices.
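For illustration, the sketch below shows the kind of function PolyBlocks targets: a pure dense-tensor computation written with standard PyTorch operators. The decorator-style entry point `polyblocks_jit_torch` and its `compile_options` argument are assumptions here; see the Introduction page for the actual API.

```python
import torch
from polyblocks import polyblocks_jit_torch  # assumed entry point; see the Introduction

# A function built entirely from operators on dense tensors, the kind of
# code PolyBlocks is designed to optimize well.
@polyblocks_jit_torch(compile_options={"target": "nvgpu"})  # option names assumed
def fused_step(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    return torch.relu(x @ w).sum(dim=-1)
```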

While it is hard to characterize, in general, where compilation will succeed and deliver high performance, the list below gives a sense of coverage for the AI and deep learning domain. It contains popular deep learning models from HuggingFace and TorchVision that have been tested with PolyBlocks: they are known to compile successfully and to validate against both the standard PyTorch runtime (eager mode) and torch.compile (Torch Inductor).
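That validation amounts to a tolerance-based numerical comparison. The sketch below, using only standard PyTorch APIs, compares eager mode against Torch Inductor; the commented-out line marks where the PolyBlocks-compiled callable (name hypothetical) would be checked in the same way.

```python
import torch

def model_fn(x, w):
    # Any dense-tensor computation expressible with PyTorch operators.
    return torch.relu(x @ w)

x = torch.randn(64, 128)
w = torch.randn(128, 256)

eager_out = model_fn(x, w)                    # PyTorch eager (the reference)
inductor_out = torch.compile(model_fn)(x, w)  # torch.compile (Torch Inductor)
# polyblocks_out = polyblocks_fn(x, w)        # PolyBlocks-compiled (hypothetical name)

# Compilers may reorder floating-point operations, so validation uses
# tolerances rather than bitwise equality.
assert torch.allclose(eager_out, inductor_out, rtol=1e-4, atol=1e-5)
```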

Many of these models are also available on the PolyBlocks Playground, and they are expected to compile and execute successfully via the Docker release as well. Any recent regressions are marked with a red cross next to the model name.

HuggingFace Models

- BLOOM
- ConvNeXt
- DETR
- DPT Large
- Flux
- Gemma-2-2b
- GTE Small (feature extraction)
- Google DePlot
- Llama 3.1 8B
- Llama 3.2 1B Instruct
- MPNet Base v2
- MPT-7B
- MiniLM
- MiniCPM
- MistoLine
- Mistral Instruct
- Moondream
- nanoLLaVA
- owlvit-base-patch32
- Query Wellformedness Score
- SQLCoder
- Stable Diffusion 3.5
- Stable Diffusion Turbo UNet block
- Stable Diffusion XL Base 1.0
- Stable Diffusion XL Refiner (image-to-image)
- Starling-LM-7B-alpha
- Table Transformer
- XLM-RoBERTa Base
- YOLOS
- Zephyr 7B Alpha

TorchVision Models

Listed below are some TorchVision models that have been tested with PolyBlocks: they compile successfully and validate. Many of them also run significantly faster with PolyBlocks than with PyTorch eager mode or Torch Inductor. A minimal sketch of the validation workflow follows the list.

- AlexNet
- DenseNet
- EfficientNet
- GoogLeNet
- Inception
- MNASNet
- MobileNetV3
- ShuffleNet
- SqueezeNet
- HRNet
- ResNet50
- UNet
- VGG19
- ViT
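As a concrete instance of the testing workflow behind this list, the sketch below loads a pretrained TorchVision ResNet50 and validates a torch.compile'd version against eager mode. A PolyBlocks-compiled model would be checked the same way; its construction is omitted here since it depends on the PolyBlocks API.

```python
import torch
from torchvision import models

# Load a pretrained ResNet50 and switch to inference mode.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
x = torch.randn(8, 3, 224, 224)

with torch.no_grad():
    ref = model(x)                   # PyTorch eager (the reference)
    compiled = torch.compile(model)  # Torch Inductor baseline
    out = compiled(x)
    # out = polyblocks_model(x)      # PolyBlocks-compiled model (hypothetical)

# Tolerance-based validation, matching how the models above were checked.
assert torch.allclose(ref, out, rtol=1e-3, atol=1e-4)
```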