Introducing HuggingFace integration


TL;DR

Blaxel now integrates with HuggingFace, allowing you to connect to and deploy AI models (public, private, or gated) directly through the platform. Setup is easy with a HuggingFace access token, and you get full control over model deployment settings and endpoint management.

Our HuggingFace integration enables you to connect to serverless endpoints from HuggingFace—whether they're public, gated, or private—directly through your agents on Blaxel. But that's not all! This integration is bidirectional, meaning you can create new deployments on HuggingFace right from the Blaxel console.

Key Features That Make This Integration Shine

  • Universal Access: Connect to any public model from HuggingFace's Inference API (serverless)
  • Private Model Support: Access private models from Inference Endpoints (dedicated) within your authorized organizations
  • Seamless Deployment: Create and manage HuggingFace Inference Endpoints directly from the Blaxel console
  • Dedicated Endpoints: Get your own global Blaxel endpoint for each model integration
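To make the first feature concrete, here is a minimal sketch of what a call to HuggingFace's serverless Inference API looks like under the hood; the dedicated Blaxel endpoint fronts requests of this shape. The model id and token are placeholders, not values from this post.

```python
def build_inference_request(model_id: str, token: str) -> dict:
    """Build the URL and auth header for a call to HuggingFace's
    serverless Inference API for a given public model."""
    return {
        "url": f"https://api-inference.huggingface.co/models/{model_id}",
        "headers": {"Authorization": f"Bearer {token}"},
    }

# Example with a placeholder model id and token
req = build_inference_request("google-bert/bert-base-uncased", "hf_xxx")
print(req["url"])
```

The same token you register in your workspace settings is what authorizes requests like this one.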

🔧 Getting Started is Simple

Setting up the integration is straightforward: just register a HuggingFace access token in your Blaxel workspace settings. The scope of this token determines which resources Blaxel can access on HuggingFace, including which public and private models you can connect to.

When creating a new deployment, you have complete control over:

  • Organization selection for endpoint deployment
  • Model selection from your available options
  • Instance type and size configuration
  • Custom endpoint naming
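The settings above can be pictured as a small deployment config. This is an illustrative sketch only: the field names are our own shorthand for the console options, not Blaxel's actual API, and the model and organization values are placeholders.

```python
# Settings the console asks for before creating a deployment
# (field names are illustrative, not Blaxel's API)
REQUIRED_FIELDS = {
    "organization",   # org the endpoint is deployed under
    "model",          # model chosen from your available options
    "instance_type",  # hardware class
    "instance_size",  # size within that class
    "endpoint_name",  # custom name for the endpoint
}

def validate_deployment(config: dict) -> dict:
    """Check that every deployment setting is present before submitting."""
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        raise ValueError(f"missing deployment settings: {sorted(missing)}")
    return config

deployment = validate_deployment({
    "organization": "my-org",
    "model": "mistralai/Mistral-7B-v0.1",
    "instance_type": "nvidia-a10g",
    "instance_size": "x1",
    "endpoint_name": "my-mistral-endpoint",
})
```

A config like this maps one-to-one onto the four choices listed above, which is all the console needs to create the HuggingFace Inference Endpoint on your behalf.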

Working with Gated Models

Need access to gated models? 🔐 No problem! Request access on HuggingFace first, accept any applicable terms and conditions, and you're good to go. Some models grant immediate access, while others may require manual approval.
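If a gated request fails, the HTTP status code usually tells you why. The helper below is a hypothetical convenience for interpreting the common cases; the status-code meanings follow HuggingFace's API conventions, and the messages are our own wording.

```python
def explain_access_status(status_code: int) -> str:
    """Map common HTTP status codes from a gated-model request
    to a human-readable explanation."""
    if status_code == 200:
        return "access granted"
    if status_code == 401:
        return "invalid or missing HuggingFace access token"
    if status_code == 403:
        return "token is valid, but access to this gated model has not been granted yet"
    return f"unexpected status {status_code}"

print(explain_access_status(403))
```

In practice, a 403 on a gated model is the cue to go back to the model page on HuggingFace and request (or wait for) access.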

🎉 The Future of AI Development

Developer experience is at the heart of everything we build at Blaxel. Our HuggingFace integration exemplifies this commitment, providing developers with powerful model deployment and management tools that make integrating AI capabilities faster and more reliable than ever before.

Our integration-first approach sets a new standard for AI model deployment. By making HuggingFace's extensive model ecosystem easily accessible, we're reshaping how teams build and deploy AI applications. This is just the beginning of our journey to simplify AI integration.

Have questions about our HuggingFace integration? We'd love to hear from you! Drop us a line and let us know how you're using this integration to enhance your AI workflows.