Four patterns,
one brokerage.
Static, declarative, open-ended, composed — four ways the LLM drives UI, plus an 11-component brokerage catalogue that exercises every pattern. Powered by Gemini · 20 verified prompts · 4 themes · one droplet chat.
1 · Prompt reference
Every prompt below has been verified end-to-end. Filter by pattern or search, then hover any row and click the copy icon to copy a prompt; paste it into the droplet chat to fire it verbatim.
20 prompts · 5 categories.
2 · Static generative UI
The component is fixed in source. The LLM only decides when to render it.
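A minimal sketch of the static pattern. The tool name, `Tool` shape, and card markup below are illustrative, not the demo's real API; the point is that `render()` ignores the model's arguments entirely, so the LLM's only decision is whether to call the tool.

```typescript
// Static pattern sketch: the markup is hardcoded in source;
// the model's tool call only triggers rendering, it supplies no content.
type Tool = { name: string; render: (args: Record<string, unknown>) => string };

const marketStatusCard: Tool = {
  name: "showMarketStatus",
  // Fixed output: nothing from the model's arguments reaches the UI.
  render: () => `<card title="Market status">Markets are open 9:30–16:00 ET.</card>`,
};

// Simulated model turn: the LLM decided to call the tool; args are ignored.
console.log(marketStatusCard.render({ anything: "is discarded" }));
```

Because the output is constant, the component can be snapshot-tested independently of the model.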
3 · Declarative generative UI
Same component, same schema — the LLM fills symbol, price, changePercent, blurb. Two themes exposed as separate actions.
See the 2 Declarative prompts in the reference table above →
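A sketch of the declarative pattern under stated assumptions: the field names (symbol, price, changePercent, blurb) come from the section above, while the function name and the two-theme parameter are hypothetical stand-ins for the demo's separate actions. The layout is fixed; the model only fills the schema.

```typescript
// Declarative pattern sketch: fixed layout, model-filled fields.
interface StockCardArgs {
  symbol: string;
  price: number;
  changePercent: number;
  blurb: string;
}

// Two themes exposed as one parameter here; the demo exposes them
// as separate actions, but the render logic is the same.
function renderStockCard(args: StockCardArgs, theme: "light" | "dark" = "light"): string {
  const sign = args.changePercent >= 0 ? "+" : "";
  return [
    `[${theme}] ${args.symbol} · $${args.price.toFixed(2)}`,
    `${sign}${args.changePercent.toFixed(2)}%`,
    args.blurb,
  ].join("\n");
}

// Simulated tool call with arguments filled in by the model:
console.log(renderStockCard(
  { symbol: "ACME", price: 123.45, changePercent: -1.2, blurb: "Quiet session." },
  "dark",
));
```

Validating the model's arguments against the schema before rendering is what keeps this pattern safe: a malformed tool call fails type-checking instead of producing a broken card.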
4 · Open-ended generative UI
Wildcard render (name: "*") catches any tool call the model dreams up. The renderer auto-detects shape: scalars → metric grid, list-of-objects → table, anything else → JSON fallback. Everything follows the global theme.
See the 2 Open-ended prompts in the reference table above →
Both use the bridge tool renderDynamicPayload. If Gemini ever replies with raw JSON instead of a card, re-ask with “via renderDynamicPayload” added to the prompt.
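The auto-detection described above can be sketched as a small classifier. This is an assumed implementation, not the demo's actual renderer: the `Shape` names and the exact scalar rules are mine, but the three-way split (scalars → metric grid, list-of-objects → table, anything else → JSON fallback) follows the text.

```typescript
// Wildcard-renderer shape detection sketch.
type Shape = "metric-grid" | "table" | "json";

function detectShape(payload: unknown): Shape {
  // A non-empty list where every element is a plain object → table.
  if (
    Array.isArray(payload) && payload.length > 0 &&
    payload.every((row) => typeof row === "object" && row !== null && !Array.isArray(row))
  ) {
    return "table";
  }
  // An object whose values are all scalars → metric grid.
  if (typeof payload === "object" && payload !== null && !Array.isArray(payload)) {
    const values = Object.values(payload as Record<string, unknown>);
    if (values.length > 0 && values.every((v) => ["string", "number", "boolean"].includes(typeof v))) {
      return "metric-grid";
    }
  }
  return "json"; // fallback: pretty-printed JSON
}

console.log(detectShape({ aum: 12_000_000, clients: 48 }));                        // metric-grid
console.log(detectShape([{ symbol: "A", price: 1 }, { symbol: "B", price: 2 }])); // table
console.log(detectShape({ nested: { deep: true } }));                             // json
```

Because the fallback is total, any payload the model invents still renders something, which is what makes the wildcard safe to expose.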
5 · Brokerage catalogue (11 declarative components)
A stock brokerage / advisory assistant. The sidebar can render any of the 11 components below via tool calls filled by Gemini. The wildcard open-ended renderer (section 4) still catches anything the model invents outside this list.
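One way to picture how the catalogue and the wildcard coexist is a registry dispatch. The tool names and renderers below are illustrative placeholders (the real catalogue has 11 components): known names resolve to their declarative renderer, and anything else falls through to the "*" entry from section 4.

```typescript
// Catalogue dispatch sketch: named declarative renderers plus a wildcard.
type Renderer = (args: Record<string, unknown>) => string;

const catalogue = new Map<string, Renderer>([
  ["showStockCard", (a) => `StockCard(${String(a.symbol)})`],
  ["showPortfolioSummary", (a) => `PortfolioSummary(${String(a.accountId)})`],
  // …the remaining declarative components would be registered here…
  ["*", (a) => `OpenEnded(${JSON.stringify(a)})`], // wildcard fallback
]);

function dispatch(toolName: string, args: Record<string, unknown>): string {
  // Unknown tool names never error; they render through the wildcard.
  const render = catalogue.get(toolName) ?? catalogue.get("*")!;
  return render(args);
}

console.log(dispatch("showStockCard", { symbol: "ACME" }));
console.log(dispatch("somethingInvented", { x: 1 }));
```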
6 · Composed responses
Tool calls don't live alone. Veda can wrap rendered cards with markdown — context before, interpretation after — so the visual answers what and the prose answers so what. The runtime streams text and tool calls in order; no extra plumbing required.
A · 📝 → ▢ : Veda writes 1–2 sentences of markdown context, then calls a tool whose render() output appears inline below the prose.
B · 📝 → ▢ → 📝 : Same as A, plus 1–2 sentences of analysis after the card renders. Useful when the visual answers what and the trailing prose answers so what.
How it works: when the LLM emits content like <text> <tool_call> <text>, the chat UI renders each chunk as it streams — markdown bubble, then the tool's render() output, then the trailing markdown bubble. The composition is purely a prompt concern.
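The in-order rendering described above can be sketched as a loop over a typed chunk stream. The `Chunk` shape and renderer are assumptions for illustration, not the demo's runtime API; the point is that text and tool calls interleave in arrival order with no extra plumbing.

```typescript
// Composed-response sketch: render text and tool-call chunks in stream order.
type Chunk =
  | { kind: "text"; markdown: string }
  | { kind: "tool"; name: string; args: Record<string, unknown> };

function renderChunk(chunk: Chunk): string {
  return chunk.kind === "text"
    ? `[markdown] ${chunk.markdown}`
    : `[card:${chunk.name}] ${JSON.stringify(chunk.args)}`;
}

// Pattern B from above: context → card → analysis, driven purely by the prompt.
const stream: Chunk[] = [
  { kind: "text", markdown: "ACME fell 1.2% today." },
  { kind: "tool", name: "showStockCard", args: { symbol: "ACME" } },
  { kind: "text", markdown: "The dip tracks the sector; no firm-specific news." },
];

for (const chunk of stream) console.log(renderChunk(chunk));
```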