Lab

Small experiments. Working code.

Tiny, self-contained demos that run entirely in your browser - no server round-trips. Each one is a bite-sized sketch of something bigger from my daily work.

Experiment · 01 · Agentic AI
ReAct agent loop

Simulate a Thought-Action-Observation loop - the core pattern behind LangChain agents and tool-using LLMs. Type a question and watch the agent "reason" through tools step by step.

Agent idle. Press Run agent to start the ReAct loop.
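The loop behind this demo can be sketched in a few lines of JavaScript. Here `reason` is a hard-coded stand-in for the LLM call, and the tool names (`search`, `calculator`) are illustrative, not a real toolkit:

```javascript
// Minimal ReAct-style Thought-Action-Observation loop.
const tools = {
  search: (q) => (q.includes("capital of France") ? "Paris" : "no result"),
  calculator: (expr) => String(Function(`"use strict"; return (${expr})`)()),
};

// Stand-in "policy": a real agent would call an LLM here with the
// question plus all observations so far, and parse its reply.
function reason(question, observations) {
  if (observations.length === 0) {
    return { thought: "I should look this up.", action: "search", input: question };
  }
  return { thought: "I have what I need.", final: observations[observations.length - 1] };
}

function runAgent(question, maxSteps = 5) {
  const observations = [];
  for (let step = 0; step < maxSteps; step++) {
    const move = reason(question, observations);
    console.log(`Thought: ${move.thought}`);
    if (move.final !== undefined) return move.final; // loop ends on a final answer
    const obs = tools[move.action](move.input);      // run the chosen tool
    console.log(`Action: ${move.action}("${move.input}") -> Observation: ${obs}`);
    observations.push(obs);                          // feed the result back in
  }
  return "gave up";
}
```

The key design point: the model never touches the tools directly - it only emits an action name and input, and the loop executes it and feeds the observation back.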
Experiment · 02 · GenAI
Prompt builder

Assemble a structured LLM prompt from parts. Pick a role, task, format, and constraints - see the full prompt ready to copy. This is how production prompts are engineered.

Press Build to generate your prompt.
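The assembly step is simple string composition. A sketch, with example values - the part names mirror the demo's controls:

```javascript
// Build a structured prompt from named parts: role, task, format, constraints.
function buildPrompt({ role, task, format, constraints = [] }) {
  return [
    `You are ${role}.`,
    `Task: ${task}`,
    `Output format: ${format}`,
    constraints.length
      ? `Constraints:\n${constraints.map((c) => `- ${c}`).join("\n")}`
      : "",
  ]
    .filter(Boolean) // drop empty sections
    .join("\n\n");
}

const prompt = buildPrompt({
  role: "a senior data engineer",
  task: "Review this SQL query for performance issues.",
  format: "a bulleted list",
  constraints: ["Be concise", "Cite the exact clause for each issue"],
});
console.log(prompt);
```

Keeping the parts separate is the point: you can swap the role or tighten the constraints without rewriting the whole prompt.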
Experiment · 03 · GenAI
Token estimator

Rough BPE-style token count for any text. LLMs don't see words - they see tokens. Paste your prompt and see why a 4K context window fills up faster than you expect.

tokens · words · chars · tok/word (counts update as you type)
GPT-4 context: 128K tokens · Claude: 200K tokens · average English ~1.3 tok/word
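A real tokenizer splits text on learned BPE merges; the estimate here blends two common rules of thumb (~4 characters per token, ~1.3 tokens per English word). The blend is an assumption of this sketch, not how any production tokenizer works:

```javascript
// Rough token estimate from character and word counts.
function estimateTokens(text) {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  const chars = text.length;
  // Average the two heuristics; neither is exact.
  const tokens = Math.round((chars / 4 + words * 1.3) / 2);
  return {
    tokens,
    words,
    chars,
    tokPerWord: words ? +(tokens / words).toFixed(2) : 0,
  };
}
```

Good enough to see why long prompts blow past a small context window; use a real tokenizer (e.g. tiktoken) when the count actually matters.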
Experiment · 04 · ML
Markov word machine

Classic character-level Markov chain trained on a tiny corpus of ML research abstracts. Pick an order (1 = chaotic · 3 = coherent) and generate something that sounds like a paper. It won't be one.

Press Generate to synthesize a fake research sentence.
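The whole technique fits in two small functions: map every order-n context to the characters that follow it in the corpus, then sample from those lists. The corpus string below is a stand-in for the demo's abstracts:

```javascript
// Train: record which character follows each order-n context.
function train(corpus, order) {
  const model = {};
  for (let i = 0; i + order < corpus.length; i++) {
    const ctx = corpus.slice(i, i + order);
    (model[ctx] ||= []).push(corpus[i + order]);
  }
  return model;
}

// Generate: repeatedly sample a next character for the current context.
function generate(model, order, length, seed) {
  let out = seed.slice(0, order);
  while (out.length < length) {
    const choices = model[out.slice(-order)];
    if (!choices) break; // dead end: this context never appeared in training
    out += choices[Math.floor(Math.random() * choices.length)];
  }
  return out;
}

const corpus = "we train neural networks on large datasets. we train transformers.";
console.log(generate(train(corpus, 3), 3, 60, "we "));
```

Higher order means longer contexts, fewer choices per context, and more coherent (but more plagiarized) output - exactly the 1-vs-3 slider in the demo.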
Experiment · 05 · NLP
Sentiment meter

Lexicon-based scoring of whatever you type. Not a transformer - just a surprising amount of mileage from simple word matching.

− Negative · 0 · Positive + (score updates as you type)
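The scoring is a lexicon lookup and a sum. The word list below is a tiny illustrative stand-in - real lexicons like AFINN or VADER run to thousands of entries:

```javascript
// Lexicon-based sentiment: sum per-word scores, sign gives the label.
const lexicon = { love: 2, great: 2, good: 1, bad: -1, awful: -2, hate: -2 };

function sentiment(text) {
  const words = text.toLowerCase().match(/[a-z]+/g) || [];
  const score = words.reduce((sum, w) => sum + (lexicon[w] ?? 0), 0);
  return {
    score,
    label: score > 0 ? "positive" : score < 0 ? "negative" : "neutral",
  };
}
```

No model weights, no context, no negation handling ("not good" scores positive) - and yet on plain text it lands right surprisingly often.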
Experiment · 06
Mini terminal

A tiny shell that answers questions about me. Type help to see what it knows.

lok@portfolio ~ $ hello
Hey. I'm a little CLI pretending to know Lok. Try help, stack, or certs.
lok@portfolio ~ $ 
Experiment · 07 · Stats
Number cruncher

Paste comma-separated numbers. It computes mean, median, standard deviation, min, max, and count - live, as you type.

mean · median · std dev · min · max · n
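One parse, one pass of arithmetic. A sketch - note it uses the population standard deviation (divide by n), one of two common conventions:

```javascript
// Parse comma-separated numbers and compute summary statistics.
function crunch(input) {
  const nums = input.split(",").map(Number).filter((x) => !Number.isNaN(x));
  const n = nums.length;
  if (!n) return null;
  const mean = nums.reduce((a, b) => a + b, 0) / n;
  const sorted = [...nums].sort((a, b) => a - b);
  const median =
    n % 2 ? sorted[(n - 1) / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2;
  const variance = nums.reduce((a, x) => a + (x - mean) ** 2, 0) / n;
  return {
    mean,
    median,
    stdDev: Math.sqrt(variance),
    min: sorted[0],
    max: sorted[n - 1],
    n,
  };
}
```

`crunch("1, 2, 3, 4")` gives mean 2.5, median 2.5, std dev ≈1.118 - recompute on every keystroke and you have the live version.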
Experiment · 08 · Stats
Monte Carlo dice

Roll three d6 a thousand times. Watch the distribution converge on the expected mean of 10.5 - the law of large numbers in motion.

0 rolls · expected mean 10.5
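The simulation itself is a few lines. Each 3d6 sum has expected value 3 × 3.5 = 10.5, and the running mean drifts toward it as rolls accumulate:

```javascript
// One fair six-sided die.
function rollD6() {
  return 1 + Math.floor(Math.random() * 6);
}

// Roll 3d6 `rolls` times; track the mean and the distribution of sums.
function simulate(rolls) {
  let total = 0;
  const counts = new Array(19).fill(0); // sums range from 3 to 18
  for (let i = 0; i < rolls; i++) {
    const sum = rollD6() + rollD6() + rollD6();
    counts[sum]++;
    total += sum;
  }
  return { mean: total / rolls, counts };
}

console.log(simulate(1000).mean); // typically lands near 10.5
```

The spread of the running mean shrinks like 1/√n - a thousand rolls is usually enough to land within a tenth or two of 10.5.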
Have an idea?
Want one of these for your team?
Let's talk →