Session 1: Coding with LLMs

Learning Objectives:

  1. Understand why and how to access LLMs programmatically
  2. Generate an API key and store it securely
  3. Call an LLM from Python with a library and integrate its output

Recap: From Theory to Practice

From Session 0, we learned how LLMs work in theory: they process tokens and generate responses, context windows limit input size, and different models have different capabilities

Today: How to use this knowledge in code


Why Program with LLMs?


Chatbots like ChatGPT are designed around human conversations

They were the first “killer apps” for LLMs


Chatbots interact with LLMs programmatically by sending inputs and receiving outputs


We can do the same thing in our code and environments

In other words, we can “cut out the middle man” chatbot and use LLMs directly


LLM Development Workflow

  1. Identify an LLM provider that offers an API
  2. Generate API key
  3. Choose LLM library/tools
  4. Send LLM input, integrate output

Part 1: Identify LLM provider


LLM Providers and APIs


Standard API characteristics: HTTP endpoints, JSON in and out, key-based authentication

Example: listing Anthropic’s available models

curl https://api.anthropic.com/v1/models \
    -H "Content-Type: application/json" \
    -H "x-api-key: $(llm keys get anthropic)" \
    -H "anthropic-version: 2023-06-01" | jq

{
  "data": [
    {
      "type": "model",
      "id": "claude-opus-4-1-20250805",
      "display_name": "Claude Opus 4.1",
      "created_at": "2025-08-05T00:00:00Z"
    },
    {
      "type": "model",
      "id": "claude-opus-4-20250514",
      "display_name": "Claude Opus 4",
      "created_at": "2025-05-22T00:00:00Z"
    }
  ]
}

APIs accept and return JSON-formatted text

{
    "messages": [
        {
            "content": "Hello, how are you?",
            "role": "user"
        }
    ]
}

The structure of inputs and responses is now (mostly) standardized across providers; most follow the shape of OpenAI’s Chat Completions format


API endpoints defined in provider docs
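
To see what the libraries in Part 3 do under the hood, here is a minimal sketch of the same kind of call made directly over HTTP with Python’s requests package. It assumes ANTHROPIC_API_KEY is set in the environment (see Part 2); the model name and max_tokens are illustrative:

# Sketch: calling Anthropic's Messages endpoint directly over HTTP.
# Assumes ANTHROPIC_API_KEY is set in the environment.
import os
import requests

response = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "Content-Type": "application/json",
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
    },
    json={
        "model": "claude-sonnet-4-20250514",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello, how are you?"}],
    },
)
response.raise_for_status()
print(response.json()["content"][0]["text"])  # the assistant's reply text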


Part 2: Generate API Key


Most APIs require a key to authenticate

# the x-api-key header carries your secret key
curl https://api.anthropic.com/v1/models \
    -H "Content-Type: application/json" \
    -H "x-api-key: sk-ant-IjcOV..." \
    -H "anthropic-version: 2023-06-01" | jq

Super secure API key generation demo


.env secrets file convention

ANTHROPIC_API_KEY=sk-ant-IjcOVgUYbqqdoLPP3UAbGRyODx
OPENAI_API_KEY=sk-proj-1234567890abcdef1234567890ab

Add .env to .gitignore so keys are never committed to version control

.env secrets file convention cont’d

echo ANTHROPIC_API_KEY=sk-ant-IjcOV... >> .env
set -a       # auto-export every variable defined while this is on
source .env  # load the KEY=value pairs from .env
set +a       # stop auto-exporting

curl https://api.anthropic.com/v1/models \
    -H "Content-Type: application/json" \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "anthropic-version: 2023-06-01"  | jq

Part 3: Choose LLM library


Software libraries wrap API calls


LLMs in Python

from litellm import completion
import os

# ANTHROPIC_API_KEY must be set in the environment
response = completion(
  model="anthropic/claude-sonnet-4-20250514",
  messages=[{
    "content": "Hello, how are you?",
    "role": "user"
  }]
)
print(response.json())

Response

{"choices": [{"finish_reason": "stop",
      "index": 0,
      "message": {"content": "Hello! I'm doing well, thank you for "
                             "asking. I'm here and ready to help with "
                             "whatever you'd like to discuss or work "
                             "on. How are you doing today?",
                  "function_call": None,
                  "role": "assistant",
                  "tool_calls": None}}],
 "created": 1758822587,
 "id": "chatcmpl-e02204e5-5c20-47dd-b6c5-f8f1c8f7a5ba",
 "model": "claude-sonnet-4-20250514",
 "object": "chat.completion",
 ...
 "system_fingerprint": None,
 "usage": {"cache_creation_input_tokens": 0,
           "cache_read_input_tokens": 0,
           "completion_tokens": 39,
           "completion_tokens_details": None,
           "prompt_tokens": 13,
           "prompt_tokens_details": {"audio_tokens": None,
                 "cache_creation_token_details": {
                     "ephemeral_1h_input_tokens": 0,
                      "ephemeral_5m_input_tokens": 0
                  },
                 "cache_creation_tokens": 0,
                 "cached_tokens": 0,
                 "image_tokens": None,
                 "text_tokens": None},
           "total_tokens": 52 } }

Workshop setup


Key Takeaways

  1. APIs enable programmatic access to LLMs via HTTP endpoints
  2. API keys provide authentication and must be kept secure
  3. litellm library simplifies working with multiple LLM providers
  4. JSON format is standard for API requests and responses
  5. Environment variables (.env files) keep credentials out of source code
  6. LLM responses include usage metadata (tokens, costs); see the cost sketch after this list
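
For example, litellm can estimate the dollar cost of a call from its usage metadata. A sketch using litellm’s completion_cost helper (prices come from litellm’s internal model registry, so treat the number as an estimate):

# Sketch: estimating the cost of a completion from its usage metadata.
from litellm import completion, completion_cost

response = completion(
    model="anthropic/claude-sonnet-4-20250514",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
)
print(completion_cost(completion_response=response))  # estimated cost in USD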

Theory ↔ Practice Connections

Theory: LLMs process tokens and generate responses

Practice: API calls send/receive these tokens as JSON


Theory: Context windows limit input size

Practice: Must monitor token usage in API responses
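
One way to respect the context window is to count tokens before sending a request. A sketch using litellm’s token_counter helper (it falls back to an approximate tokenizer for some models):

# Sketch: counting tokens before making a request.
from litellm import token_counter

messages = [{"content": "Hello, how are you?", "role": "user"}]
n = token_counter(model="anthropic/claude-sonnet-4-20250514", messages=messages)
print(n)  # tokens this request will consume from the context window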


Theory: Different models have different capabilities

Practice: litellm provides unified interface across providers
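
For example, the same call site can target different providers just by changing the model string. A sketch, assuming both ANTHROPIC_API_KEY and OPENAI_API_KEY are set; the model names are illustrative:

# Sketch: one call site, many providers; only the model string changes.
from litellm import completion

for model in ["anthropic/claude-sonnet-4-20250514", "openai/gpt-4o-mini"]:
    response = completion(model=model, messages=[{"content": "Hello!", "role": "user"}])
    print(model, "->", response.choices[0].message.content)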


Looking Ahead: Session 2

Next topic: Advanced Context Engineering & Prompt Patterns

Building on today:

Now that we can call LLMs from code, we’ll learn how to engineer context and apply prompt patterns

Demo code: lectures/demos/session_1/


Questions?

Next session: Advanced Context Engineering & Prompt Patterns