I Replaced Vector DBs with Google's Memory Agent Pattern for My Notes in Obsidian

April 3, 2026

This started because my Obsidian assistant kept getting amnesia. I didn't want to stand up Pinecone or Redis just so Claude could remember that Alice approved the Q3 budget last week. Turns out, with 200K+ context windows, you might not need any of that.

I want to share a new mechanism that I've started running. It's a system built on SQLite and direct LLM reasoning: no vector databases, no embedding pipeline. Vector search was mostly a workaround for tiny context windows and keeping prompts from getting messy. With modern context sizes, you can often skip that and just let the model read your memories directly.


The Setup

I take detailed notes, both in my personal life and at work. I used to scrawl in notebooks that would get lost or get stuck on a shelf and never be referenced again. A few years ago, I moved to Obsidian for everything, and it has been fantastic. In the last year, I've started hooking genAI up to my notes. Today I run both Claude Code (for my personal notes) and Kiro-CLI (for my work notes). I can ask questions, get them to do roll-ups for leadership, track my goals, and write my reports. But it's always had one big Achilles' heel: memory. When I ask about a meeting, it uses an Obsidian MCP to search my vault. It's time-consuming, error-prone, and I want it to be better.

The obvious fix is a vector database. Embed the memories. Store the vectors. Do a similarity search at query time. It works. But it also means a Redis stack, a Pinecone account, or a locally running Chroma instance, plus an embedding API, plus pipeline code to stitch it all together. For a personal tool, that's a lot, and there's a real risk that it won't work exactly like I want it to. I need to ask "what happened on Feb 1 2026" or "recap the last meeting I had with this person", things that embeddings and RAG aren't great at.

Then I ran across Google's always-on-memory agent: https://github.com/GoogleCloudPlatform/generative-ai/tree/main/gemini/agents/always-on-memory-agent. The idea is pretty simple: don't do a similarity search at all; just give the LLM your recent memories directly and let it reason over them.

I wanted to know if that held up on AWS Bedrock with Claude Haiku 4.5. So I built it (with Claude Code, of course) and added in some extra bells and whistles.

Go to my GitHub repo, but make sure to come back!

https://github.com/ccrngd1/ProtoGensis/tree/main/memory-agent-bedrock


An Insight That Changes the Math

Older models topped out at 4K or 8K tokens. You couldn't fit many documents in a prompt. Embeddings let you retrieve the relevant documents without loading everything. That was genuinely important. Haiku 4.5 offers a 200K context window, so what can we do with that?

A structured memory (summary, entities, topics, importance score) runs about 300 tokens. Which means we can fit about 650 memories before hitting the ceiling. In practice, it's a bit less, since the system prompt and query also eat tokens, but for a personal assistant that tracks meetings, notes, and conversations, that's months of context.
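The arithmetic above can be sketched in a few lines; the 5K-token overhead allowance for the system prompt and query is my own assumption, not a figure from the project:

```python
# Back-of-the-envelope context budget. All numbers are rough estimates;
# the overhead figure is an assumption, not measured from the real system.
CONTEXT_WINDOW = 200_000   # Haiku 4.5 context window, in tokens
TOKENS_PER_MEMORY = 300    # one structured memory, formatted into the prompt
PROMPT_OVERHEAD = 5_000    # assumed allowance for system prompt + query

max_memories = (CONTEXT_WINDOW - PROMPT_OVERHEAD) // TOKENS_PER_MEMORY
print(max_memories)  # 650
```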

No embeddings, no vector indexes, no cosine similarity.

The LLM reasons directly over semantics, and it's better at that than cosine similarity.


The Architecture

The orchestrator isn't a separate service. It's a Python class inside the FastAPI process that coordinates the three agents.

The IngestAgent's job is simple: take raw text and ask Haiku what's worth remembering. It extracts a summary, entities (names, places, things), topics, and an importance score from 0 to 1. That bundle goes into the `memories` table.

The ConsolidateAgent runs on intelligent scheduling: at startup if any unconsolidated memories exist, when a threshold is reached (5+ memories by default), and daily as a forced pass. When triggered, it batches unconsolidated memories and asks Haiku to find cross-cutting connections and generate insights. Results land in a `consolidations` table. The system tracks the last consolidation timestamp to ensure regular processing even with low memory accumulation.

The QueryAgent reads recent memories plus consolidation insights into a single prompt and returns a synthesized answer with citation IDs. That's the whole query path.
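The query path amounts to one prompt build. This is a sketch; the formatting and prompt wording are illustrative, not the repo's exact strings:

```python
def build_query_prompt(question: str, memories: list, consolidations: list) -> str:
    """Assemble recent memories and consolidation insights into one prompt."""
    mem_lines = "\n".join(
        f"[memory:{m['id']}] {m['summary']}" for m in memories
    )
    con_lines = "\n".join(
        f"[consolidation:{c['id']}] {c['insights']}" for c in consolidations
    )
    return (
        "Answer using only the records below, citing the IDs you rely on.\n\n"
        f"MEMORIES:\n{mem_lines}\n\n"
        f"INSIGHTS:\n{con_lines}\n\n"
        f"QUESTION: {question}"
    )
```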


What Actually Gets Stored

When you ingest text like "Met with Alice today. Q3 budget is approved, $2.4M," the system doesn't just dump that raw string into the database. Instead, the IngestAgent sends it to Haiku and asks, "What's important here?"

The LLM extracts structured metadata:

{
  "id": "a3f1c9d2-...",
  "summary": "Alice confirmed Q3 budget approval of $2.4M",
  "entities": ["Alice", "Q3 budget"],
  "topics": ["finance", "meetings"],
  "importance": 0.82,
  "source": "notes",
  "timestamp": "2026-03-27T14:23:15.123456+00:00",
  "consolidated": 0
}

The memories table holds these individual records. At ~300 tokens per memory when formatted into a prompt (including the metadata), the theoretical ceiling is around 650 memories in Haiku's 200K context window. I intentionally set the default to 50 recent memories, so I'm well short of that ceiling.

When the ConsolidateAgent runs, it doesn't just summarize memories. It reasons over them. It finds patterns, draws connections, and generates insights about what the memories mean together. These insights get stored as separate records in the consolidations table:

{
  "id": "3c765a26-...",
  "memory_ids": ["a3f1c9d2-...", "b7e4f8a1-...", "c9d2e5b3-..."],
  "connections": "All three meetings with Alice mentioned budget concerns...",
  "insights": "Budget oversight appears to be a recurring priority...",
  "timestamp": "2026-03-27T14:28:00.000000+00:00"
}

When you query, the system loads both the raw memories *and* the consolidation insights into the same prompt. The LLM reasons over both layers at once, including recent facts plus synthesized patterns. That's how you get answers like "Alice has raised budget concerns in three separate meetings [memory:a3f1c9d2, memory:b7e4f8a1] and the pattern suggests this is a high priority [consolidation:3c765a26]."

This two-table design is the whole persistence layer. A single SQLite file. No Redis. No Pinecone. No embedding pipeline. Just structured records that an LLM can reason over directly.
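As a sketch, the two tables can be created with nothing but the stdlib sqlite3 module. Column names mirror the JSON records shown above; the repo's actual schema may differ:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the real system uses a single .db file
conn.executescript("""
CREATE TABLE memories (
    id           TEXT PRIMARY KEY,
    summary      TEXT NOT NULL,
    entities     TEXT,              -- JSON-encoded array
    topics       TEXT,              -- JSON-encoded array
    importance   REAL,
    source       TEXT,
    timestamp    TEXT,
    consolidated INTEGER DEFAULT 0
);
CREATE TABLE consolidations (
    id          TEXT PRIMARY KEY,
    memory_ids  TEXT,              -- JSON-encoded array of memory IDs
    connections TEXT,
    insights    TEXT,
    timestamp   TEXT
);
""")
```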

What the Consolidation Agent Actually Does

Most memory systems are purely retrieval. They store, search, and return similar text. The consolidation agent works differently: it reads a batch of unconsolidated memories and asks, "What connects these?", "What do these have in common?", "How do these relate?"

Those insights get written as a separate consolidations record. When you query, you get both the raw memories and the synthesized insights. The agent isn't just recalling. It's reasoning.

The sleeping brain analogy from the original Google implementation seems pretty accurate. During idle time, the system is processing rather than just waiting. This is something I often struggle with when building agents: how can I make them more autonomous so that they'll work when I don't? This is a good use of that "downtime".

For a personal tool, this matters. "You've had three meetings with Alice this month, and all of them mentioned budget concerns" is more useful than three individual recall hits.

The original design used a simple threshold for consolidation: it waited for 5 memories before consolidating. That works for active use. But if you're only ingesting sporadically, a note here, an image there, you might wait days before hitting the threshold. Meanwhile, those memories sit unprocessed, and queries don't benefit from the consolidation agent's pattern recognition.

So I decided to add two more triggers. When the server starts, it checks for unconsolidated memories from the previous session and processes them immediately. No waiting. And on a daily timer (configurable), it forces a consolidation pass if anything is waiting, regardless of whether the 5-memory threshold has been met. So even a single note per week still gets consolidated within 24 hours.

The original threshold-based mode still runs for active use. But now there's a safety net beneath it. If you're actively ingesting, the threshold catches it. If you're not, the daily pass does. And on restart, nothing falls through the cracks.

File Watching and Change Detection

I have an Obsidian vault with hundreds of notes, and I don't want to manually ingest each one. I want to point the watcher at the vault and let it handle the rest. That's exactly what this does.

On startup, the watcher scans the directory and ingests everything it hasn't seen before. It runs two modes in the background: a quick scan every 60 seconds checks for new files (fast, no hash calculation, just "is this path in the database?"), and a full scan every 30 minutes calculates SHA256 hashes and compares them to stored values. If a file has changed, the system deletes the old memories, cleans up any consolidations that referenced them, re-ingests the new version, and updates the tracking record. No duplicates. No stale data.
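The full-scan change check can be sketched like this. The `stored_hashes` dict stands in for the tracking table in SQLite, and the `.md` filter is illustrative:

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Hash a file's contents in chunks to avoid loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(root: Path, stored_hashes: dict) -> list:
    """Return paths that are new or whose content no longer matches."""
    changed = []
    for path in sorted(root.rglob("*.md")):
        if stored_hashes.get(str(path)) != file_sha256(path):
            changed.append(path)   # new file or modified content
    return changed
```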

For personal note workflows, the watcher covers what you'd expect:

  • Text files (.txt, .md, .json, .csv, .log, .yaml, .yml)
  • Images (.png, .jpg, .jpeg, .gif, .webp), analyzed via Claude Haiku's vision capabilities
  • PDFs (.pdf), text extracted via PyPDF2

Recursive scanning and directory exclusions are configurable. Edit a note in Obsidian, and within 30 minutes, the agent's memory reflects the change.


Why No Vector DB

Whether you need embeddings for your personal notes boils down to two things: how many notes you have and how you want to search them.

Vector search is genuinely important when you have millions of documents and can't fit the relevant ones in context. It's a retrieval optimization for large-scale problems.

At personal scale, you're working with hundreds of memories, not millions. Vector search means running an embedding pipeline, paying for the API calls, managing the index, and implementing similarity search, all to solve a problem that a 200K context window already solves.

Here's how I think about the tradeoffs: complexity, accuracy, and scale.

I couldn't justify having to set up and maintain a vector database, even FAISS, for the few notes that I generate.

On top of that, this approach gives me better accuracy for the way I want to search my notes.


Seeing It in Action

Here's what using it actually looks like. Configuration is handled via a .env file with sensible defaults. You can copy the example directly and start using it (assuming you have already run aws configure on your machine).

cp .env.example .env

Then start the server with the file watcher active:

./scripts/run-with-watcher.sh

curl the /ingest endpoint to test a sample ingestion. This is optional, just to demonstrate how it works; you can skip it if you're setting up for a real use case. (The host and port below assume the default local server.)

curl -X POST http://localhost:8000/ingest \
  -H "Content-Type: application/json" \
  -d '{"text": "Met with Alice today. Q3 budget is approved, $2.4M.", "source": "notes"}'

The response will look like:

{
  "id": "a3f1c9d2-...",
  "summary": "Alice confirmed Q3 budget approval of $2.4M.",
  "entities": ["Alice", "Q3 budget"],
  "topics": ["finance", "meetings"],
  "importance": 0.82,
  "source": "notes"
}

To query it later, curl the query endpoint with

query?q=What+did+Alice+say+about+the+budget

Or use the CLI:

python cli.py ingest "Paris is the capital of France." --source wikipedia
python cli.py query "What do you know about France?"
python cli.py consolidate  # trigger manually
python cli.py status       # see memory count, consolidation state

Making It Useful Beyond curl

curl works, but you're not going to curl your memory system at 2 am when you have an idea, so the project has two integration paths.

Claude Code / Kiro-CLI skill. I added a native skill that auto-activates when relevant. Say "remember that Alice approved the Q3 budget" and it stores it without you needing to invoke anything. Ask "what did Alice say about the budget?" next week, and it checks memory before answering. It handles ingestion, queries, file uploads, and status checks through natural conversation. This is how I interact with the memory system most often, since I tend to live in CC/Kiro most of the time.

CLI. For terminal users or scripting:

python cli.py ingest "Paris is the capital of France." --source wikipedia
python cli.py query "What do you know about France?"
python cli.py consolidate
python cli.py status
python cli.py list --limit 10

The CLI talks to the same SQLite database, so you can mix API, CLI, and skill usage interchangeably. Ingest from a script, query from Claude Code, and check status from the terminal. It all hits the same store.


What's Next

The good news: the system works, and I'm using it today. But here are a few additions it could benefit from.

Importance-weighted query filtering. Right now, the query agent reads the N most recent memories. That means old but important memories can get pushed out by recent noise. I want to filter by importance score before building the context, but I'm not sure yet how aggressive to be. I don't want a high-importance memory from two months ago to vanish just because I ingested a bunch of meeting notes this week.
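One possible shape for that filter, blending recency with the stored importance score; the 0.5 weight is a made-up starting point, not something the system does today:

```python
def select_memories(memories: list, limit: int = 50,
                    importance_weight: float = 0.5) -> list:
    """Rank memories by a blend of recency and importance.

    `memories` is assumed ordered oldest -> newest; recency is normalized
    to [0, 1] so the newest memory scores 1.0 on the recency axis.
    """
    n = len(memories)
    scored = [
        (importance_weight * m["importance"]
         + (1 - importance_weight) * (i / max(n - 1, 1)), i, m)
        for i, m in enumerate(memories)
    ]
    scored.sort(key=lambda t: (-t[0], -t[1]))  # best score first, newest wins ties
    return [m for _, _, m in scored[:limit]]
```

With this scoring, an old memory with importance 0.9 outranks newer low-importance noise instead of being pushed out of the window.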

Metadata filtering. Similarly, since each memory has associated metadata, I could use that metadata to filter out memories that are clearly irrelevant. If I'm asking questions about Alice, I don't need any memories that only involve Bob or Charlie. For my use case, this could be based on my note hierarchy, since I keep notes aligned to customers and/or specific projects.

Delete and update endpoints. The store is append-only right now. That's fine until you ingest something incorrect and need to fix it. DELETE /memory/{id} is an obvious gap. I just haven't needed it badly enough yet to build it.

MCP integration. Wrapping this as an MCP server would let any Claude-compatible client use it as persistent memory. That's probably the highest-value thing on this list, but it's also the most work.


Try It

The project is up on GitHub as part of an ongoing series I started, where I implement research papers, explore interesting ideas, and repurpose useful tools for Bedrock (https://github.com/ccrngd1/ProtoGensis/tree/main/memory-agent-bedrock).

It's Python with no exotic dependencies, just boto3, FastAPI, and SQLite.

The default model is `us.anthropic.claude-haiku-4-5-20251001-v1:0` (Bedrock cross-region inference profile), configurable via .env.

A note on security: the server has no authentication by default; it's designed for local use. If you expose it on a network, add auth first. The SQLite database will contain everything you've ever ingested, so treat it accordingly (chmod 600 memory.db is a good start).

If you're building personal AI tooling and stalling on the memory problem, this pattern is worth a look. Let me know if you decide to try it out, how it works for you, and which project you're using it on.


About

Nicholaus Lawson is a Solution Architect with a background in software engineering and AI/ML. He has worked across many verticals, including Industrial Automation, Health Care, Financial Services, and Software companies, from start-ups to large enterprises.

This article and any opinions expressed by Nicholaus are his own and not a reflection of his current, past, or future employers or any of his colleagues or associates.

Feel free to connect with Nicholaus via LinkedIn at https://www.linkedin.com/in/nicholaus-lawson/
