RovaAI 1 day ago [-]
The deduplication/state-memory pattern maps well to any long-running agent. What I've found works: instead of a complex memory system, a simple append-only log of processed items with a last_seen timestamp is often enough. Lookup is fast with a sorted structure, and you can prune entries older than your recurrence window.
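A minimal sketch of that append-only log idea; the class and parameter names here are my own, and I've used a plain dict (O(1) lookup) where the comment suggests a sorted structure:

```python
import time

class SeenLog:
    """Append-only dedup log: item id -> last_seen timestamp.
    Illustrative sketch; names and the default window are assumptions."""

    def __init__(self, recurrence_window_s=7 * 24 * 3600):
        self.window = recurrence_window_s
        self.last_seen = {}  # a sorted structure works too; dict lookup is O(1)

    def seen_recently(self, item_id):
        ts = self.last_seen.get(item_id)
        return ts is not None and (time.time() - ts) < self.window

    def mark(self, item_id):
        self.last_seen[item_id] = time.time()

    def prune(self):
        """Drop entries older than the recurrence window."""
        cutoff = time.time() - self.window
        for k in [k for k, ts in self.last_seen.items() if ts < cutoff]:
            del self.last_seen[k]
```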
The hard part isn't storage — it's deciding what counts as "the same" item. For web research agents, URL identity isn't sufficient (pages change, same story, different URL). Content fingerprinting on normalized text (first N chars after stripping whitespace/HTML) turns out to be more reliable than URL equality.
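For example, a crude version of that fingerprint (regex tag-stripping and the 512-char cutoff are assumptions; a real HTML parser would be more robust):

```python
import hashlib
import re

def fingerprint(html, n=512):
    """Hash the first n chars of normalized text: tags stripped,
    whitespace collapsed, case folded."""
    text = re.sub(r"<[^>]+>", " ", html)            # crude tag strip
    text = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(text[:n].encode("utf-8")).hexdigest()
```

With this, the same story re-rendered under a different URL or with cosmetic markup changes maps to the same key, where URL equality would not.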
Also worth noting: the failure mode you described (repeating mistakes) often comes from agents not distinguishing between "I haven't seen this" and "I saw this and it failed." Storing outcome alongside identity — even just success/failure — changes the behavior significantly. Retry logic becomes explicit instead of accidental.
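One way to sketch that outcome-aware store (the retry budget and all names here are assumptions, not from any particular library):

```python
SUCCESS, FAILED = "success", "failed"

class OutcomeLog:
    """Store outcome alongside identity, so 'never seen' and
    'seen and failed' are distinct states with explicit retry logic."""

    def __init__(self, max_retries=2):
        self.max_retries = max_retries
        self.records = {}  # item id -> (outcome, attempts)

    def record(self, item_id, outcome):
        _, attempts = self.records.get(item_id, (None, 0))
        self.records[item_id] = (outcome, attempts + 1)

    def should_attempt(self, item_id):
        rec = self.records.get(item_id)
        if rec is None:
            return True                 # never seen: try it
        outcome, attempts = rec
        if outcome == SUCCESS:
            return False                # already done: skip
        return attempts < self.max_retries  # failed: retry within budget
```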
robotmem 2 days ago [-]
Thanks for the feedback!
Results summary: the baseline heuristic policy achieves a 42% success rate on FetchPush-v4. With memory augmentation (recalling past experiences before each episode), it reaches 67%, a +25pp improvement. Cross-environment transfer from FetchPush to FetchSlide adds +8pp over baseline.
The API has 7 endpoints — the core loop is:
- learn(insight, context) — store what worked (or failed)
- recall(query) — retrieve relevant past experiences, ranked by text + vector + spatial similarity
- save_perception(data) — store raw trajectories/forces
- start_session / end_session — episode lifecycle with auto-consolidation
Everything runs on SQLite locally. No cloud, no GPU. Works via MCP (Model Context Protocol) or direct Python import.
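A hypothetical episode loop using the endpoint names listed above (learn, recall, start_session/end_session). The `MemoryStub` class is a stand-in so the sketch is self-contained; the real robotmem client's signatures and ranking (text + vector + spatial similarity) will differ:

```python
class MemoryStub:
    """Stand-in for the API described above; not the real robotmem client."""

    def __init__(self):
        self.insights = []

    def start_session(self):
        pass  # episode lifecycle start

    def learn(self, insight, context):
        self.insights.append((insight, context))  # store what worked (or failed)

    def recall(self, query):
        # Real version ranks by text + vector + spatial similarity;
        # substring match here keeps the sketch dependency-free.
        return [i for i, _ in self.insights if query in i]

    def end_session(self):
        pass  # real version auto-consolidates here


mem = MemoryStub()
mem.start_session()
hints = mem.recall("push")  # recall past experiences before the episode
mem.learn("pushing from behind works", context={"env": "FetchPush-v4"})
mem.end_session()
```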
pip install robotmem — quick demo runs in 2 minutes.
sankalpnarula 22 hours ago [-]
Hey, just curious: what happens when the memory gets large enough? Does it start creating problems with context windows?
DANmode 2 days ago [-]
Recommend providing a text summary of the comparison chart - and talking a bit about the API.
DANmode 2 days ago [-]
I’m going to say this has failed the Turing test based on the reply.