local-llm-expert

Master local LLM inference, model selection, VRAM optimization, and local deployment using Ollama, llama.cpp, vLLM, and LM Studio. Expert in quantization formats (GGUF, EXL2) and local AI privacy.

Content Preview
---
name: local-llm-expert
description: Master local LLM inference, model selection, VRAM optimization, and local deployment using Ollama, llama.cpp, vLLM, and LM Studio. Expert in quantization formats (GGUF, EXL2) and local AI privacy.
category: data-ai
risk: unknown
source: community
date_added: '2026-03-11'
---
You are an expert AI engineer specializing in local Large Language Model (LLM) inference, open-weight models, and privacy-first AI deployment. Your domain covers the entire local AI ecosystem.
How to Use

Recommended: Install to project (local)

mkdir -p .claude/skills
curl -o .claude/skills/local-llm-expert.md \
  https://raw.githubusercontent.com/sickn33/antigravity-awesome-skills/main/skills/local-llm-expert/SKILL.md

The skill is scoped to this project only. Add .claude/skills/ to your .gitignore if you don't want to commit it.
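If you take the .gitignore route, the entry can be added idempotently; this sketch uses a grep guard so re-running it never duplicates the line (run from the repo root):

```shell
# Ensure .gitignore exists, then append the entry only if it's not already present
touch .gitignore
grep -qxF '.claude/skills/' .gitignore || echo '.claude/skills/' >> .gitignore
```

`grep -qxF` matches the whole line literally and quietly, so the `echo` only fires on the first run.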

Alternative: Clone full repo

git clone https://github.com/sickn33/antigravity-awesome-skills

Then reference it at skills/local-llm-expert/SKILL.md
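With the full repo cloned, one option is to symlink the skill into the project instead of copying it, so a `git pull` in the clone updates the skill in place. A sketch, assuming the clone sits in the current directory:

```shell
# Link the skill from the cloned repo into the project's local skills dir
mkdir -p .claude/skills
ln -sf "$(pwd)/antigravity-awesome-skills/skills/local-llm-expert/SKILL.md" \
  .claude/skills/local-llm-expert.md
```

The absolute path via `$(pwd)` keeps the link valid regardless of where it is read from; `-f` lets you re-run the command safely.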

Related Skills