AI TRAINING
Hugging Face 101: Open Models for Engineers
Confidently navigate the Hugging Face Hub, deploy open models, and select the right model for real business tasks.
See if this training is the right one for your team — free diagnostic
What it covers
This hands-on course introduces engineers to the Hugging Face ecosystem from the ground up. Participants learn to search, evaluate, and run open-source models using the Transformers library, deploy inference endpoints, and publish interactive demos with Spaces. By the end, attendees can make informed model-selection decisions aligned with business constraints such as latency, cost, and data privacy.
What you'll be able to do
- Load and run any model from the Hugging Face Hub using Transformers pipelines in under 10 lines of Python (sketched below)
- Fine-tune a pre-trained text or vision model on a custom dataset using the Trainer API or PEFT/LoRA (sketched below)
- Deploy a model as a live REST endpoint using Hugging Face Inference Endpoints and call it from an application
- Build and publish a shareable Gradio demo on Hugging Face Spaces within a single session
- Evaluate and compare open models against business criteria (accuracy, latency, privacy, licensing) to make a justified selection
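A minimal sketch of the pipeline workflow promised above, assuming a Python environment with transformers installed. The DistilBERT sentiment checkpoint is a real Hub model, named here purely for illustration:

```python
# Minimal sketch: load a Hub model with a Transformers pipeline.
# The model id is a real sentiment checkpoint, chosen for illustration only.
from transformers import pipeline

# pipeline() downloads the model and tokenizer from the Hub on first use
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Open models give us control over latency and privacy."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```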
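And a hedged sketch of parameter-efficient fine-tuning with PEFT/LoRA; the base model, target modules, and hyperparameters below are illustrative assumptions, not course-endorsed defaults:

```python
# Hedged sketch: attach LoRA adapters to a small text classifier with peft.
# Base model and hyperparameters are illustrative only.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

lora_config = LoraConfig(
    r=8,                                 # rank of the low-rank update
    lora_alpha=16,                       # scaling factor
    target_modules=["q_lin", "v_lin"],   # DistilBERT attention projections
    lora_dropout=0.1,
    task_type="SEQ_CLS",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of base weights
# `model` drops into transformers.Trainer like any other model.
```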
Topics covered
- Navigating the Hugging Face Hub: search, filters, model cards, and leaderboards
- Using the Transformers library: pipelines, tokenizers, and model loading
- Fine-tuning pre-trained models with Trainer API and PEFT/LoRA
- Deploying models via Hugging Face Inference Endpoints (sketched below)
- Building and publishing interactive demos with Gradio and Spaces (sketched below)
- Reading and writing model cards for documentation and governance
- Comparing open models vs. proprietary APIs on cost, latency, and privacy
- Quantisation basics: running models efficiently with bitsandbytes and GGUF (4-bit loading sketched below)
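To make the Inference Endpoints topic concrete, here is a hedged sketch of calling a deployed endpoint over HTTPS. ENDPOINT_URL is a placeholder for your own endpoint, an HF_TOKEN environment variable is assumed, and the payload schema varies with the task the endpoint serves:

```python
# Hedged sketch: call a deployed Inference Endpoint with plain HTTPS.
# ENDPOINT_URL is a placeholder; HF_TOKEN is assumed to be set.
import os
import requests

ENDPOINT_URL = "https://your-endpoint.endpoints.huggingface.cloud"  # placeholder

response = requests.post(
    ENDPOINT_URL,
    headers={
        "Authorization": f"Bearer {os.environ['HF_TOKEN']}",
        "Content-Type": "application/json",
    },
    json={"inputs": "Summarise: open models let teams control cost and privacy."},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```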
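For the Gradio and Spaces topic, a minimal demo sketch: saved as app.py in a Space it runs automatically, while locally launch() serves it on localhost. The model id is the same illustrative checkpoint used earlier:

```python
# Minimal sketch: a Gradio demo suitable for publishing as a Space's app.py.
import gradio as gr
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def predict(text: str) -> str:
    result = classifier(text)[0]
    return f"{result['label']} ({result['score']:.3f})"

demo = gr.Interface(
    fn=predict,
    inputs=gr.Textbox(label="Input text"),
    outputs=gr.Textbox(label="Prediction"),
    title="Sentiment demo",
)

if __name__ == "__main__":
    demo.launch()
```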
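And for the quantisation topic, a hedged sketch of 4-bit loading through the Transformers bitsandbytes integration; it assumes a CUDA GPU with the bitsandbytes and accelerate packages installed, and the 7B model id is illustrative:

```python
# Hedged sketch: load a 7B model in 4-bit with bitsandbytes via Transformers.
# Assumes a CUDA GPU; the model id is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # requires accelerate; places layers on the GPU
)

inputs = tokenizer(
    "Explain quantisation in one sentence.", return_tensors="pt"
).to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```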
Delivery
Typically delivered as a 2-3 day in-person or live-virtual bootcamp with a 70/30 hands-on to instruction ratio. Each session includes guided lab notebooks hosted on Google Colab or a pre-configured cloud environment. Participants receive access to a shared Hugging Face organisation for collaboration. A take-home capstone project (deploying a task-specific model end-to-end) is included. Remote delivery works well; in-person adds value for team alignment discussions around model selection.
What makes it work
- Pairing each concept with a real internal use case so participants immediately see business relevance
- Establishing a shared team Space and model registry during training to build collaborative habits from day one
- Including a model-selection rubric workshop so engineers can justify open-model choices to non-technical stakeholders
- Following up with a 2-week async check-in where participants share their capstone results and blockers
Common mistakes
- Pulling the largest available model by default without checking inference cost, latency, or licence compatibility
- Skipping model cards and leaderboard context, leading to poor model-task fit in production
- Treating Hugging Face Inference Endpoints as a production-grade scalable solution without understanding cold-start and rate-limit constraints
- Ignoring quantisation options and attempting to run 7B+ parameter models on CPU, causing frustration and abandonment
When NOT to take this
Teams that have already standardised on a single proprietary LLM API (e.g., OpenAI GPT-4o) with no plans to self-host or fine-tune. For such teams, the open-model tooling overhead adds complexity without payoff.
Use cases this training unlocks
- AI-Assisted Code Generation and Review: Accelerate software delivery by automating code suggestions, boilerplate generation, and PR security reviews.
- Intelligent Code Migration Assistant: Accelerate codebase migrations between languages, frameworks, or architectures using generative AI.
- Automated Technical Documentation from Code: Automatically generate and maintain technical documentation from source code and architecture decisions for engineering teams.
- Automated Bug Detection and Classification: Automatically detect, classify, and prioritize bugs so engineering teams fix what matters first.
- AI-Generated Test Cases and UI Regression Detection: Automatically generate test cases from requirements and catch UI regressions for engineering teams.
- Multi-Modal AI Content Moderation: Automatically detect hate speech, violence, and misinformation across text, images, and video at scale.
Other trainings at this level
This training is part of a Data & AI catalog built for leaders serious about execution. Take the free diagnostic to see which trainings your team needs.