# Reference Models

As described in the Introduction and FAQ pages, the PolyBlocks compiler can compile functions written in frameworks like PyTorch, JAX, and TensorFlow. The functions can come from any domain: deep learning as well as other scientific, engineering, data analytics, and high-performance computing workloads. Good optimization can be expected as long as the functions are written using operators on dense tensors/matrices, as in the sketch below.
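To make that concrete, here is a minimal sketch of the kind of function PolyBlocks targets: a PyTorch computation built purely from operators on dense tensors. The `polyblocks_jit_torch` decorator and its import are assumptions made for illustration (and are left commented out); see the Introduction page for the actual PolyBlocks API.

```python
import torch

# from polyblocks import polyblocks_jit_torch  # hypothetical import


# @polyblocks_jit_torch  # hypothetical: JIT-compile this function with PolyBlocks
def fused_norm_matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Built entirely from dense tensor/matrix operators: the kind of
    # code PolyBlocks is designed to optimize well.
    x = torch.relu(torch.matmul(a, b))
    return x / (x.sum(dim=-1, keepdim=True) + 1e-6)


a = torch.randn(128, 256)
b = torch.randn(256, 64)
out = fused_norm_matmul(a, b)  # runs eagerly here; PolyBlocks would compile it
```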

While it is hard to state, in general, which programs will compile successfully and achieve high performance, the list below gives a sense of coverage for the AI and deep learning domain: popular deep learning models from HuggingFace and torchvision that have been tested with PolyBlocks and are known to compile successfully and to validate against the standard PyTorch runtime (eager mode) as well as torch.compile (Torch Inductor).

Many of these models are also available on the PolyBlocks Playground, and they are expected to compile and execute successfully through the Docker release. Any recent regressions are marked with a red cross.
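"Validates" here means numerical agreement: outputs of the PolyBlocks-compiled function are compared, within a tolerance, against PyTorch eager and Torch Inductor outputs. Below is a minimal sketch of such a check using only standard PyTorch APIs; the tolerances are illustrative, and the PolyBlocks run is a placeholder.

```python
import torch


def f(x):
    return torch.softmax(x @ x.T, dim=-1)


x = torch.randn(64, 64)
eager_out = f(x)                    # standard PyTorch runtime (eager)
inductor_out = torch.compile(f)(x)  # torch.compile (Torch Inductor)
# polyblocks_out = ...              # PolyBlocks-compiled run would go here

# Agreement within an (illustrative) tolerance is what "validates" means.
assert torch.allclose(eager_out, inductor_out, rtol=1e-4, atol=1e-5)
```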

## HuggingFace Models

| Model | Status |
|-------|--------|
| BLOOM | |
| ConvNext | |
| DPT Large | |
| GTE Small feature extraction | |
| Google/Deplot | |
| MPNet base v2 | |
| Mini-lm | |
| MiniCPM | |
| Mistral instruct | |
| Moondream | |
| owlvit-base-patch32 | |
| Query Wellformedness score | |
| SqlCoder | |
| Stable diffusion turbo Unet block | |
| Stable diffusion image to image XL refiner | |
| TableTransformer | |
| XLM Roberta Base | |
| YoloS | |
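As an example of how one of the listed models might be exercised, the sketch below runs MPNet base v2 in eager PyTorch, the baseline that PolyBlocks-compiled outputs are validated against. The `sentence-transformers/all-mpnet-base-v2` checkpoint is an assumption about which MPNet variant was tested.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint; the exact variant tested with PolyBlocks may differ.
name = "sentence-transformers/all-mpnet-base-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

inputs = tokenizer("PolyBlocks compiles dense tensor programs.",
                   return_tensors="pt")
with torch.no_grad():
    eager_out = model(**inputs).last_hidden_state      # eager baseline
    inductor_out = torch.compile(model)(**inputs).last_hidden_state
    # A PolyBlocks-compiled run would be validated the same way.

assert torch.allclose(eager_out, inductor_out, rtol=1e-3, atol=1e-3)
```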

## TorchVision Models

Listed below are some TorchVision models that have been tested with PolyBlocks: they compile successfully and validate. Many also run significantly faster with PolyBlocks than with the standard PyTorch runtime or Torch Inductor; a sketch of how such a timing comparison might be set up follows the table.

| Model | Status |
|-------|--------|
| AlexNet | |
| DenseNet | |
| EfficientNet | |
| GoogleNet | |
| Inception | |
| MNasNet | |
| MobileNetv3 | |
| ShuffleNet | |
| SqueezeNet | |
| HRNet | |
| ResNet50 | |
| UNet | |
| VGG19 | |
| VIT | |
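The speedup claim above is a timing comparison; below is a minimal sketch of how one might be set up for torchvision's ResNet50 on CPU. The warm-up and iteration counts are illustrative, and the PolyBlocks-compiled variant is left as a placeholder since its invocation depends on the PolyBlocks API (on GPU, `torch.cuda.synchronize()` calls would also be needed around the timers).

```python
import time

import torch
import torchvision.models as models


def bench(fn, x, iters=20):
    # Warm up (this also triggers JIT compilation), then time the
    # average forward pass over `iters` iterations.
    with torch.no_grad():
        for _ in range(3):
            fn(x)
        start = time.perf_counter()
        for _ in range(iters):
            fn(x)
        return (time.perf_counter() - start) / iters


model = models.resnet50(weights=None).eval()
x = torch.randn(8, 3, 224, 224)

t_eager = bench(model, x)                    # standard PyTorch runtime
t_inductor = bench(torch.compile(model), x)  # Torch Inductor
# t_polyblocks = bench(..., x)               # PolyBlocks-compiled model here

print(f"eager: {t_eager * 1e3:.1f} ms/iter, "
      f"inductor: {t_inductor * 1e3:.1f} ms/iter")
```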