LiteHRNet: Optimized for Qualcomm Devices

LiteHRNet is a machine learning model that detects human pose and returns a location and confidence for each of 17 joints.

This is based on the implementation of LiteHRNet found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.

Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.
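As a rough, non-authoritative sketch of the hosted workflow, the snippet below submits a profile job for a pre-exported asset through the qai_hub Python client. The model path and device name are illustrative placeholders, and it assumes you have already signed up and configured your Qualcomm AI Hub API token.

```python
# Hedged sketch: profiling a pre-exported LiteHRNet asset on a hosted Qualcomm
# device via the qai_hub Python client. Assumes an API token is already
# configured and that the client accepts a local model file path; the file
# name and device name below are placeholders.
import qai_hub as hub

profile_job = hub.submit_profile_job(
    model="LiteHRNet.onnx",                   # a downloaded pre-exported asset (placeholder)
    device=hub.Device("Samsung Galaxy S24"),  # any hosted Qualcomm device
)
print(profile_job)  # per-device metrics appear in the AI Hub dashboard once the job completes
```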

Getting Started

There are two ways to deploy this model on your device:

Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| QNN_DLC | float | Universal | QAIRT 2.42 | Download |
| TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | Download |

For more device-specific assets and performance metrics, visit LiteHRNet on Qualcomm® AI Hub.
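As a minimal sketch of using a downloaded asset, the ONNX package can be run locally with ONNX Runtime. The file name, NCHW input layout at 256x192, and heatmap output interpretation below are assumptions to verify against the model you actually download.

```python
# Hedged sketch: local inference on the pre-exported ONNX asset with ONNX Runtime.
# The file name, input layout, and output interpretation are assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("LiteHRNet.onnx")

# Stand-in for a preprocessed 256x192 RGB crop of a person (NCHW, float32).
image = np.random.rand(1, 3, 256, 192).astype(np.float32)

input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: image})

# LiteHRNet predicts 17 joints; models of this kind typically emit per-joint
# heatmaps from which (x, y) locations and confidences are decoded.
print([o.shape for o in outputs])
```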

Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:

  • Custom weights (e.g., fine-tuned checkpoints)
  • Custom input shapes
  • Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.

See the LiteHRNet directory in the Qualcomm® AI Hub Models repository on GitHub for usage instructions.
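For orientation, here is a hedged sketch of a custom compile flow using the qai_hub client. It assumes the qai-hub-models package exposes LiteHRNet under its usual `Model` alias, that the wrapper's forward takes a single NCHW image tensor, and that the device name and compile options shown match your installed SDK versions; the repository's per-model export script wraps these steps behind a single command.

```python
# Hedged sketch: compiling a custom LiteHRNet configuration with qai_hub.
# Module path, `Model` alias, device name, and compile options are assumptions.
import torch
import qai_hub as hub
from qai_hub_models.models.litehrnet import Model

torch_model = Model.from_pretrained()  # swap in your fine-tuned checkpoint if supported
torch_model.eval()

# Trace with the input shape you intend to deploy (custom shapes go here).
example_input = torch.rand(1, 3, 256, 192)
traced = torch.jit.trace(torch_model, example_input)

compile_job = hub.submit_compile_job(
    model=traced,
    device=hub.Device("Samsung Galaxy S24"),   # illustrative target device
    input_specs=dict(image=(1, 3, 256, 192)),  # custom input shape
    options="--target_runtime tflite",         # or onnx / qnn_dlc
)
target_model = compile_job.get_target_model()
target_model.download("litehrnet_custom.tflite")
```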

Model Details

Model Type: Pose estimation

Model Stats:

  • Input resolution: 256x192
  • Number of parameters: 1.11M
  • Model size (float): 4.49 MB

Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| LiteHRNet | ONNX | float | Snapdragon® X Elite | 5.871 | 4 - 4 | NPU |
| LiteHRNet | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 3.412 | 0 - 175 | NPU |
| LiteHRNet | ONNX | float | Qualcomm® QCS8550 (Proxy) | 5.591 | 0 - 158 | NPU |
| LiteHRNet | ONNX | float | Qualcomm® QCS9075 | 6.235 | 1 - 4 | NPU |
| LiteHRNet | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.928 | 0 - 149 | NPU |
| LiteHRNet | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 2.763 | 0 - 149 | NPU |
| LiteHRNet | QNN_DLC | float | Snapdragon® X Elite | 2.393 | 1 - 1 | NPU |
| LiteHRNet | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 1.347 | 0 - 110 | NPU |
| LiteHRNet | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 4.887 | 1 - 80 | NPU |
| LiteHRNet | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 2.065 | 1 - 118 | NPU |
| LiteHRNet | QNN_DLC | float | Qualcomm® SA8775P | 2.662 | 1 - 82 | NPU |
| LiteHRNet | QNN_DLC | float | Qualcomm® QCS9075 | 2.487 | 1 - 3 | NPU |
| LiteHRNet | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 2.915 | 0 - 107 | NPU |
| LiteHRNet | QNN_DLC | float | Qualcomm® SA7255P | 4.887 | 1 - 80 | NPU |
| LiteHRNet | QNN_DLC | float | Qualcomm® SA8295P | 3.449 | 0 - 82 | NPU |
| LiteHRNet | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 1.024 | 1 - 85 | NPU |
| LiteHRNet | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 0.847 | 1 - 84 | NPU |
| LiteHRNet | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 2.684 | 0 - 159 | NPU |
| LiteHRNet | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 8.657 | 0 - 119 | NPU |
| LiteHRNet | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 4.25 | 0 - 12 | NPU |
| LiteHRNet | TFLITE | float | Qualcomm® SA8775P | 5.259 | 0 - 119 | NPU |
| LiteHRNet | TFLITE | float | Qualcomm® QCS9075 | 5.046 | 0 - 10 | NPU |
| LiteHRNet | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 5.332 | 0 - 139 | NPU |
| LiteHRNet | TFLITE | float | Qualcomm® SA7255P | 8.657 | 0 - 119 | NPU |
| LiteHRNet | TFLITE | float | Qualcomm® SA8295P | 6.273 | 0 - 117 | NPU |
| LiteHRNet | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.217 | 0 - 122 | NPU |
| LiteHRNet | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 2.017 | 0 - 122 | NPU |

License

  • The license for the original implementation of LiteHRNet can be found here.
