I’ve always struggled with note management. I want an easy way to jot something down and find it later, but when I actually need the information, it’s never organized in a way that makes extraction simple.
Take achievement tracking, for example. If you keep a continuous log of all your notes but later need a quarterly summary of big wins, you’re basically stuck doing an O(n) scan of your own brain dump. Sure, you could pre-structure your notes, but then recording becomes a chore—and we all know what happens to systems that are painful to use. They get abandoned faster than a New Year’s gym membership.
So I took a page from my own architectural playbook: when in doubt, turn a human problem into a software problem—because software problems can be automated.
The plan was familiar territory (and something I’ve written about before in the context of documentation management): embed my notes, drop them in a vector store, and use RAG (Retrieval-Augmented Generation) to solve the search problem.
First Attempt: RAG to the Rescue
My first crack at it was textbook RAG:
def query_notes(question):
    query_embedding = embed_text(question)
    relevant_notes = vector_store.similarity_search(query_embedding, k=5)
    context = "\n".join([note.content for note in relevant_notes])
    prompt = f"Context: {context}\n\nQuestion: {question}"
    return llm.call(prompt)
Simple: embed the notes, retrieve relevant ones, and pass them into an LLM. And—it worked!
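For the curious, embed_text and vector_store don't have to be anything fancy. Here's a minimal in-memory sketch of what could sit behind them, assuming an OpenAI-style embeddings API; any embedding provider or real vector database would slot in the same way.

import numpy as np
from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

@dataclass
class Note:
    content: str
    embedding: np.ndarray

def embed_text(text: str) -> np.ndarray:
    # Any embeddings endpoint works; this sketch assumes OpenAI's.
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

class InMemoryVectorStore:
    def __init__(self):
        self.notes: list[Note] = []

    def add(self, content: str) -> None:
        self.notes.append(Note(content, embed_text(content)))

    def similarity_search(self, query_embedding: np.ndarray, k: int = 5) -> list[Note]:
        # Rank stored notes by cosine similarity to the query embedding.
        def score(note: Note) -> float:
            return float(
                np.dot(note.embedding, query_embedding)
                / (np.linalg.norm(note.embedding) * np.linalg.norm(query_embedding))
            )
        return sorted(self.notes, key=score, reverse=True)[:k]

vector_store = InMemoryVectorStore()

A persistent vector database is the obvious upgrade, but the interface stays the same.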
I wrapped it in a CLI because, let’s be honest, I didn’t want to build a UI until I had the core functionality nailed down. (Also, real work happens in terminals. That’s just science.)
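The CLI itself was nothing fancy either. Stripped of command handling and validation, a sketch of the loop looks roughly like this (the exit commands are illustrative):

# A stripped-down sketch of the CLI loop, reusing query_notes from above;
# the real version parses commands and validates input before dispatching.
def main():
    while True:
        raw = input("notes> ").strip()
        if raw in ("/quit", "/exit"):
            break
        print(query_notes(raw))

if __name__ == "__main__":
    main()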
This bit is important for later on. 😉
Refactoring Notes
I started with my usual structure: a weekly log with important information copied forward. But it felt clunky—like trying to teach a fish to climb a tree. That structure kept me sane, but AI didn’t need it.
So I restructured into 10 separate files, each holding a full history for its category:
- action_items.md: Completed and outstanding tasks
- contacts.md: People I’ve met and relevant context
- meetings.md: Notes with outcomes and follow-ups
- thoughts.md: Random insights and ideas
- drafts.md: Work-in-progress content
- …
Each file followed a lightweight structure. For example:
# Action Items
## Outstanding
- [2024-01-15] Follow up with Jane about the database migration timeline
- Context: Discussed during architecture review
- Priority: High
- Dependencies: Database team capacity planning
## Completed
- [2024-01-10] ✅ Review security audit findings
- Completed: 2024-01-12
- Outcome: Three critical issues identified and assigned
This approach required two decisions from AI: (1) determine which category a note belonged to, and (2) update the correct file.
I added a CLI command to handle note additions: AI would classify the input, read the appropriate file, retrieve relevant context from the vector store, and write the update.
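Under the hood, the classification step is just another LLM call. A rough sketch, reusing the llm.call helper from earlier (the CATEGORIES list is truncated and the fallback choice is my assumption):

# Illustrative sketch of the classification step; CATEGORIES stands in for the
# ten category files, and falling back to "thoughts" is an assumption.
CATEGORIES = ["action_items", "contacts", "meetings", "thoughts", "drafts"]  # ...and the rest

def classify_category(user_input: str) -> str:
    prompt = (
        f"Classify this note into exactly one of these categories: {', '.join(CATEGORIES)}.\n"
        f"Note: {user_input}\n"
        "Respond with the category name only."
    )
    category = llm.call(prompt).strip().lower()
    return category if category in CATEGORIES else "thoughts"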
And it worked brilliantly. I no longer needed the weekly structure—just drop in notes and let AI handle the rest.
But… I was still orchestrating it all manually.
The Golden Rule
My next pain point came from the CLI. I’d find myself defaulting to asking questions, but most of my usage was actually just adding notes. So I had to enter a command every time to specify what kind of input I was giving. Annoying.
Which led me to the golden rule of software development: if you’re doing something three times, it’s time to re-evaluate what you’re actually doing.
I was writing a lot of logic just to “let AI decide what to do.”
So naturally, I asked: why am I writing all this orchestration logic myself?
ReAct: Reasoning + Acting
I’d worked with AI agents before and wasn’t new to the concept of ReAct (Reasoning and Acting). But sometimes you have to feel the pain before a solution really clicks.
ReAct combines reasoning and action in a loop until the problem is solved—basically what humans do every day. Think → try → evaluate → repeat.
The “Act” part of ReAct is powered by tools—functions the agent can invoke. Without tools, an agent is just a fancy Q&A bot. With tools, it becomes a true problem solver.
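Stripped to its essence, the loop looks something like this. It isn't any particular framework's API, just the shape of the idea (parse_tool_call is a hypothetical helper):

# The ReAct loop in miniature: reason about the next step, act with a tool,
# feed the observation back in, repeat until the model declares it's done.
def react_loop(goal: str, tools: dict, llm) -> str:
    history = [f"Goal: {goal}"]
    while True:
        decision = llm.call(
            "\n".join(history)
            + "\nEither answer with FINAL: <answer> or name a tool to call."
        )
        if decision.startswith("FINAL:"):
            return decision.removeprefix("FINAL:").strip()
        tool_name, tool_input = parse_tool_call(decision)  # hypothetical parser
        observation = tools[tool_name](tool_input)
        history += [f"Action: {decision}", f"Observation: {observation}"]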
So instead of:
- Calling AI to classify a note
- Calling it again to determine the right category
- Calling it again to update the content
…I just gave the agent a problem: “Add an action item to follow up with Jane…”
The agent thinks: “Okay, I probably need to read the action_items file. Let me check it out. Got it. Now let’s update it with the new task. Cool. Let’s write the updated file back.”
The AI reasons through each step. My only job? Provide the tools.
So I pivoted. I stopped writing orchestration code and started letting AI do the orchestration for me.
From Orchestration Mess to Agent Simplicity
What I had before:
def handle_input(command, entry):
    is_valid(command, entry)
    if command == "/q":
        return handle_question(entry)
    elif command == "/n":
        return handle_note(entry)
    ...
def handle_note(user_input):
    category = classify_category(user_input)
    file_content = file_service.read_category_file(category)
    context = vector_service.get_relevant_context(user_input)
    return update_notes(file_content, user_input, context)
What I have now (thanks to LangGraph’s create_react_agent doing the heavy lifting):
@tool("read_notes")
def read_notes_tool(category: str) -> str:
"""Read notes from a specific category file."""
return file_service.read_file(f"{category}.md")
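
# The other tools follow the same pattern. Here's a sketch of the update tool
# that appears in the list below; file_service.write_file is my assumption.
@tool("update_notes")
def update_notes_tool(category: str, content: str) -> str:
    """Overwrite a category file with the updated note content."""
    file_service.write_file(f"{category}.md", content)
    return f"Updated {category}.md"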
# Define all tools
tools = [read_notes_tool, update_notes_tool, search_confluence_tool, get_metrics_tool]
# Let the agent orchestrate everything
agent = create_react_agent(llm, tools)
def handle_input(entry):
    is_valid(entry)
    response = agent.invoke({"messages": [{"role": "user", "content": entry}]})
    return response["messages"][-1].content
That’s it.
No more branching logic for each user action. No more workflow management. Just tools and intent.
Hyperautomation: Exponential, Not Linear
This approach fundamentally changed how I think about automation. Each new tool doesn't just add linear functionality; it creates exponential possibilities, because AI automatically discovers how to combine tools in ways I may not have anticipated.
I added a Confluence API tool, and suddenly the agent could look up documentation. I'm adding Datadog API access, and soon it'll be able to fetch metrics and generate graphs on demand.
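The Confluence tool is barely more than a thin wrapper around the search endpoint. A sketch, assuming Confluence Cloud's REST content search (CQL) with the base URL and credentials coming from environment variables:

import os
import requests

# Sketch of the Confluence search tool from the tools list above. The endpoint
# and auth scheme vary by deployment, so treat this as illustrative.
@tool("search_confluence")
def search_confluence_tool(query: str) -> str:
    """Search Confluence for pages matching a free-text query."""
    base_url = os.environ["CONFLUENCE_BASE_URL"]  # e.g. https://yourorg.atlassian.net/wiki
    auth = (os.environ["CONFLUENCE_USER"], os.environ["CONFLUENCE_API_TOKEN"])
    resp = requests.get(
        f"{base_url}/rest/api/content/search",
        params={"cql": f'text ~ "{query}"', "limit": 5},
        auth=auth,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    return "\n".join(r.get("title", "") for r in results) or "No results found."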
The End of Orchestration Logic?
This experience revealed something profound: AI can completely eliminate the need for orchestration logic in applications.
Traditional automation requires us to:
- Define every step explicitly
- Handle edge cases and retries
- Maintain state and dependencies
- Update logic every time requirements shift
With AI agents, the new process is:
- Define tools
- Describe your goal
- Let AI figure out the rest
The orchestration layer—often 40–80% of automation code—just… disappears.
The best part? Emergent behavior. You might expect the agent to use tools A and B. But once you add C, it may discover that A + C is a better combo—and just start doing that. No code changes needed.
Where to Next?
What started as a note-taking experiment revealed a broader shift in how we build systems.
We’re moving from programming workflows to defining capabilities.
This isn’t just for personal productivity—it has massive implications for DevOps pipelines, data workflows, and enterprise automation. I can’t count how many hours I’ve spent with engineering teams designing workflows, handling retries, building compensating transactions… now I wonder how much of that could be solved with reasoning, not rules.
In my mind, this changes how we think about software development at its core.
But I’ll leave those implications for another post. For now, I’m off to let my AI agent schedule that follow-up with Jane.