Small experiments. Working code.
Tiny self-contained demos that run entirely in your browser - no server round-trips. Each one is a bite-sized sketch of something bigger in my daily work.
Simulate a Thought-Action-Observation loop - the core pattern behind LangChain agents and tool-using LLMs. Type a question and watch the agent "reason" through tools step by step.
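The loop itself fits in a few lines. This is a sketch of the idea, not the demo's actual code: the tools and the hard-coded "reasoner" rule are placeholders for what an LLM would decide.

```javascript
// Toy tools the agent can call. In a real agent these would be
// registered with the framework; here they're just functions.
const tools = {
  calculator: (expr) => String(eval(expr)), // toy tool: evaluate arithmetic
  clock: () => new Date().toISOString(),    // toy tool: current time
};

function runAgent(question) {
  const trace = [];
  // A real agent would ask an LLM for the next thought; this stub
  // routes math-looking questions to the calculator, everything else
  // to the clock.
  const action = /\d/.test(question) ? "calculator" : "clock";
  trace.push({ thought: `route to ${action}`, action });
  // Observation: run the tool and record what came back.
  const observation = tools[action](question.replace(/[^\d+\-*/. ]/g, ""));
  trace.push({ observation });
  return { answer: observation, trace };
}
```

Swap the stub for an LLM call that emits the next thought/action pair and you have the core of a ReAct-style agent.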
Assemble a structured LLM prompt from parts. Pick a role, task, format, and constraints - see the full prompt ready to copy. This is how production prompts are engineered.
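Under the hood this is just string assembly. A minimal sketch, assuming the usual role/task/format/constraints sections (the labels are common practice, not a fixed standard):

```javascript
// Build a structured prompt from named parts. Empty sections are dropped.
function buildPrompt({ role, task, format, constraints = [] }) {
  return [
    `You are ${role}.`,
    `Task: ${task}`,
    `Output format: ${format}`,
    constraints.length
      ? `Constraints:\n${constraints.map((c) => `- ${c}`).join("\n")}`
      : null,
  ]
    .filter(Boolean)       // skip sections that weren't provided
    .join("\n\n");
}

const prompt = buildPrompt({
  role: "a senior code reviewer",
  task: "review the attached diff",
  format: "a bullet list of issues",
  constraints: ["be terse", "cite line numbers"],
});
```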
Rough BPE-style token count for any text. LLMs don't see words - they see tokens. Paste your prompt and see why a 4K context window fills up faster than you expect.
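Real BPE tokenizers merge frequent byte pairs learned from data; a rough estimate can skip all that. This sketch (my own heuristic, not the demo's code) splits on whitespace and charges long words extra, which tracks the common "~4 characters per token" rule of thumb:

```javascript
// Rough token estimate: each word costs at least one token,
// plus one more per 4 characters of length.
function estimateTokens(text) {
  const words = text.trim().split(/\s+/).filter(Boolean);
  let tokens = 0;
  for (const w of words) tokens += Math.max(1, Math.ceil(w.length / 4));
  return tokens;
}
```

At ~4 characters per token, a 4K context window holds roughly 16K characters of English - a few pages, not a book.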
Classic character-level Markov chain trained on a tiny corpus of ML research abstracts. Pick an order (1 = chaotic · 3 = coherent) and generate something that sounds like a paper. It won't be one.
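The technique is compact enough to show whole. A generic order-n implementation of the same idea (not the site's code): map every n-character context to the characters that followed it in the corpus, then sample.

```javascript
// Train: for each n-char window, record which character came next.
function trainMarkov(corpus, order) {
  const model = new Map();
  for (let i = 0; i + order < corpus.length; i++) {
    const key = corpus.slice(i, i + order);
    if (!model.has(key)) model.set(key, []);
    model.get(key).push(corpus[i + order]); // duplicates encode frequency
  }
  return model;
}

// Generate: repeatedly look up the last n chars and pick a successor.
function generate(model, order, length, seed) {
  let out = seed;
  while (out.length < length) {
    const choices = model.get(out.slice(-order));
    if (!choices) break; // dead end: context never seen in training
    out += choices[Math.floor(Math.random() * choices.length)];
  }
  return out;
}
```

Order 1 conditions on a single character (chaos); order 3 conditions on trigrams, which is enough to fake English word shapes.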
Lexicon-based scoring of whatever you type. Not a transformer - just a surprising amount of mileage from simple word matching.
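The whole trick is a dictionary lookup. A sketch with a tiny hand-made lexicon (the words and scores below are illustrative, not a published lexicon):

```javascript
// Illustrative word scores: positive words > 0, negative words < 0.
const LEXICON = { great: 2, good: 1, fine: 0.5, bad: -1, awful: -2 };

// Sum the scores of every known word; unknown words count as 0.
function scoreSentiment(text) {
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  return words.reduce((sum, w) => sum + (LEXICON[w] || 0), 0);
}
```

It falls over on negation ("not bad" scores negative) - which is exactly the kind of failure that makes the simple version instructive.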
A tiny shell that answers questions about me. Type help to see what it knows.
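A shell like this is a command-dispatch table. A minimal sketch of the pattern - the commands and replies here are placeholders, not the ones on the actual site:

```javascript
// Map command names to handlers; each handler gets the remaining args.
const commands = {
  help: () => "commands: help, echo <text>",
  echo: (args) => args.join(" "),
};

// Parse a line into command + args and dispatch.
function run(line) {
  const [cmd, ...args] = line.trim().split(/\s+/);
  const handler = commands[cmd];
  return handler ? handler(args) : `unknown command: ${cmd} (try "help")`;
}
```

Adding a command is one new entry in the table - the parser never changes.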
Paste comma-separated numbers. It computes mean, median, standard deviation, min, max, and count - live, as you type.
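A sketch of that pipeline: parse the comma-separated input, drop anything non-numeric, then compute the summary in one pass over the sorted copy. One assumption flagged in the code: this uses the population standard deviation; the demo may use the sample formula instead.

```javascript
function summarize(input) {
  // Parse: split on commas, drop blanks and non-numbers.
  const nums = input
    .split(",")
    .map((s) => s.trim())
    .filter((s) => s !== "")
    .map(Number)
    .filter((n) => !Number.isNaN(n));
  if (nums.length === 0) return null;

  const sorted = [...nums].sort((a, b) => a - b);
  const mean = nums.reduce((a, b) => a + b, 0) / nums.length;
  const mid = Math.floor(sorted.length / 2);
  const median =
    sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
  // Population variance (divide by n, not n-1) - an assumption here.
  const variance = nums.reduce((a, b) => a + (b - mean) ** 2, 0) / nums.length;

  return {
    count: nums.length,
    mean,
    median,
    stddev: Math.sqrt(variance),
    min: sorted[0],
    max: sorted[sorted.length - 1],
  };
}
```

Wire `summarize` to an input field's change event and the stats update as you type.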
Roll three d6 a thousand times. Watch the distribution converge on the expected mean of 10.5 - the law of large numbers in motion.
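The demo in miniature: each die averages 3.5, so three of them average 3 × 3.5 = 10.5, and the sample mean drifts toward that as trials pile up. A generic sketch of the same simulation:

```javascript
// One trial: sum of `dice` rolls of a `sides`-sided die.
function rollSum(dice = 3, sides = 6) {
  let sum = 0;
  for (let i = 0; i < dice; i++) sum += 1 + Math.floor(Math.random() * sides);
  return sum;
}

// Average many trials; by the law of large numbers this
// converges on the expected value (10.5 for 3d6).
function sampleMean(trials = 1000) {
  let total = 0;
  for (let i = 0; i < trials; i++) total += rollSum();
  return total / trials;
}
```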
Want something like this for your team?