LLM
I decided to vibe code, truly putting Anthropic’s Claude CLI in the driver’s seat. It was instructive!
An overview of generative approaches to image and video synthesis, covering text-to-image and text-to-video tasks.
Complex prompts are better handled with multi-step logical answers. A reasoning model is still an LLM, but it adds a reasoning step before the response is provided.
I’m running Ollama on a MacBook, so changing the default location where Ollama downloads massive models is a necessity. It’s just one environment variable away.
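The environment variable in question is `OLLAMA_MODELS`. A minimal sketch, assuming a hypothetical external-drive path (substitute any directory with room to spare):

```shell
# Point Ollama at a roomier disk before starting the server.
# The path below is a hypothetical example.
export OLLAMA_MODELS="$HOME/ExternalSSD/ollama-models"
mkdir -p "$OLLAMA_MODELS"
```

To make it stick on macOS, add the `export` line to `~/.zshrc` so every new shell (and any `ollama serve` you launch from it) picks it up.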
Week 3 is all about agents. Workflows, tools, multi-step agents, and the protocols and frameworks involved.
RAFT (Retrieval-Augmented Fine-Tuning) confused me when I first heard about it in a training class. It’s a technique that combines offline fine-tuning with runtime retrieval to improve the …
How to interpret the numbers used to describe LLMs.
Part two of the AI Engineering Course, focusing on adapting existing LLMs through post-training work. It covers Retrieval-Augmented Generation (RAG) and fine-tuning.
Resources from Week 1 of the AI Engineering Course