Learning Objectives:
From Session 0, we learned:
LLMs are only matrices
We need tools that interface with them
Today: how to use this knowledge in code
Chatbots like ChatGPT are designed around human conversations
They were the first “killer apps” for LLMs
We can do the same thing in our code and environments
In other words, we can “cut out the middle-man” chatbot and use LLM APIs directly
The /v1/models endpoint lists the available models:
curl https://api.anthropic.com/v1/models \
-H "Content-Type: application/json" \
-H "x-api-key: $(llm keys get anthropic)" \
-H "anthropic-version: 2023-06-01" | jq
{
"data": [
{
"type": "model",
"id": "claude-opus-4-1-20250805",
"display_name": "Claude Opus 4.1",
"created_at": "2025-08-05T00:00:00Z"
},
{
"type": "model",
"id": "claude-opus-4-20250514",
"display_name": "Claude Opus 4",
"created_at": "2025-05-22T00:00:00Z"
}
]
}
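The same listing can be consumed from code. A quick sketch that extracts model ids from a response dict shaped like the JSON above (the dict here is hard-coded from the example, not fetched live):

```python
# A models-list response shaped like the JSON above
models_response = {
    "data": [
        {"type": "model", "id": "claude-opus-4-1-20250805",
         "display_name": "Claude Opus 4.1", "created_at": "2025-08-05T00:00:00Z"},
        {"type": "model", "id": "claude-opus-4-20250514",
         "display_name": "Claude Opus 4", "created_at": "2025-05-22T00:00:00Z"},
    ]
}

# Newest first, by release date (ISO-8601 timestamps sort lexicographically)
ids = [m["id"] for m in sorted(models_response["data"],
                               key=lambda m: m["created_at"], reverse=True)]
print(ids)  # ['claude-opus-4-1-20250805', 'claude-opus-4-20250514']
```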
A chat request body is a list of messages, each with a role and content:
{
"messages": [
{
"content": "Hello, how are you?",
"role": "user"
}
]
}
The structure of requests and responses is now (mostly) standardized across providers
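That shared shape is just a growing list of role/content dicts. A sketch of maintaining one across turns:

```python
# The near-universal chat format: a list of {"role", "content"} dicts
messages = []

def add_turn(role, content):
    """Append one turn; roles are typically 'user' or 'assistant'."""
    messages.append({"role": role, "content": content})

add_turn("user", "Hello, how are you?")
add_turn("assistant", "Doing well, thanks!")
add_turn("user", "Great - what can you do?")

# Each API call re-sends the whole history, so the model sees the conversation
print([m["role"] for m in messages])  # ['user', 'assistant', 'user']
```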
Endpoints such as /models, /complete, /messages, etc. require an API key (e.g. sk-ant-IjcOVgUYbqqdoLPP3UAbGRyODx...), passed in the x-api-key header:
curl https://api.anthropic.com/v1/models \
-H "Content-Type: application/json" \
-H "x-api-key: sk-ant-IjcOV..." \
-H "anthropic-version: 2023-06-01" | jq
The .env secrets file convention: a file named .env containing KEY=VALUE pairs:
ANTHROPIC_API_KEY=sk-ant-IjcOVgUYbqqdoLPP3UAbGRyODx
OPENAI_API_KEY=sk-proj-1234567890abcdef1234567890ab
Add .env to your .gitignore!
The .env secrets file convention, cont’d: append a key to .env with:
echo ANTHROPIC_API_KEY=sk-ant-IjcoV... >> .env
Load .env into your environment with:
set -a
source .env
set +a
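The set -a / source / set +a dance exports every KEY=VALUE pair in .env into the shell. The same convention can be read from Python. A minimal sketch (the widely used python-dotenv package does this more robustly; the .env.demo filename is just for illustration):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: put KEY=VALUE lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # skip blank lines, comments, and malformed entries
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Example: write a throwaway .env-style file and load it
with open(".env.demo", "w") as f:
    f.write("ANTHROPIC_API_KEY=sk-ant-XXX\n")
load_env(".env.demo")
print(os.environ["ANTHROPIC_API_KEY"])  # sk-ant-XXX
```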
curl https://api.anthropic.com/v1/models \
-H "Content-Type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" | jq
In Python we can use litellm:
from litellm import completion

# ANTHROPIC_API_KEY must be set in the environment
response = completion(
    model="anthropic/claude-sonnet-4-20250514",
    messages=[{
        "content": "Hello, how are you?",
        "role": "user"
    }]
)
print(response.json())
Response
{"choices": [{"finish_reason": "stop",
"index": 0,
"message": {"content": "Hello! I'm doing well, thank you for "
"asking. I'm here and ready to help with "
"whatever you'd like to discuss or work "
"on. How are you doing today?",
"function_call": None,
"role": "assistant",
"tool_calls": None}}],
"created": 1758822587,
"id": "chatcmpl-e02204e5-5c20-47dd-b6c5-f8f1c8f7a5ba",
"model": "claude-sonnet-4-20250514",
"object": "chat.completion",
...
...
"system_fingerprint": None,
"usage": {"cache_creation_input_tokens": 0,
"cache_read_input_tokens": 0,
"completion_tokens": 39,
"completion_tokens_details": None,
"prompt_tokens": 13,
"prompt_tokens_details": {"audio_tokens": None,
"cache_creation_token_details": {
"ephemeral_1h_input_tokens": 0,
"ephemeral_5m_input_tokens": 0
},
"cache_creation_tokens": 0,
"cached_tokens": 0,
"image_tokens": None,
"text_tokens": None},
"total_tokens": 52 } }
To run the demo:
python -m venv .venv
source .venv/bin/activate
pip install litellm
Put ANTHROPIC_API_KEY in .env:
echo ANTHROPIC_API_KEY=sk-ant-XXX > .env
set -a; source .env; set +a
python litellm_demo.py
Theory: LLMs process tokens and generate responses
Practice: API calls send/receive these tokens as JSON
Theory: Context windows limit input size
Practice: Must monitor token usage in API responses
Theory: Different models have different capabilities
Practice: litellm provides unified interface across providers
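litellm's unification is mostly in the model string: the provider becomes a prefix, while the messages payload and response shape stay the same. A sketch of that idea (the model names are examples, and a real completion() call would require the matching provider key in the environment, so the network call is left commented out):

```python
# With litellm, swapping providers is (mostly) just swapping the model string;
# the messages payload and the returned chat.completion shape stay the same.
MESSAGES = [{"role": "user", "content": "Hello, how are you?"}]

def request_kwargs(model):
    """Build the identical call arguments for any provider-prefixed model."""
    return {"model": model, "messages": MESSAGES}

# Example provider-prefixed model names
anthropic_call = request_kwargs("anthropic/claude-sonnet-4-20250514")
openai_call = request_kwargs("openai/gpt-4o")

# from litellm import completion
# response = completion(**anthropic_call)  # needs ANTHROPIC_API_KEY set
print(anthropic_call["model"], openai_call["model"])
```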
Building on today: now that we can call LLMs from code, the next topic is Advanced Context Engineering & Prompt Patterns.
Demo code: lectures/demos/session_1/
Next session: Advanced Context Engineering & Prompt Patterns