AI roleplaying apps have the concepts of “character cards” and “lorebooks”. The card is a structured data file that contains information such as the personality, behavior, and background of a character that an LLM should roleplay as. A lorebook is a collection of entries, accumulated over time, with background facts about the character and its interactions with the world. I’ll generally refer to these concepts together as “context files”.
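The exact schema varies from app to app, but a character card and lorebook entry might be shaped roughly like this (an illustrative TypeScript sketch, not any particular app’s format):

// Illustrative shapes only; real apps each define their own schemas
interface CharacterCard {
  name: string;
  personality: string; // how the character tends to think and act
  behavior: string; // speech patterns, quirks, habits
  background: string; // history and context the LLM should know
}

interface LorebookEntry {
  keys: string[]; // trigger words that pull this entry into context
  content: string; // a background fact about the character or its world
  createdAt: string; // entries accumulate over time
}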
We often think of these context files as artifacts representing virtual characters, but they can also represent us as people. Just as these files can bring context about a virtual character into a conversation so the LLM can emulate them, they can also bring context about us as people into a conversation so the LLM can use an inferred picture of who we are to adapt its responses.
Most people today do not have specific files labeled “character card” or “lorebook”. What if we did? Perhaps we would do it to make ourselves more legible to the AIs we rely on for an increasing number of day-to-day things. Perhaps we would do it because we want help making sense of what we don’t yet know how to articulate in words ourselves.
We would pretty quickly run into a problem. How do we maintain our personal context files if our personality, behavior, background, and the facts of who we are in the world change over time? We fume when an AI generates code based on outdated docs from a year ago. I imagine the frustration will be worse when an AI gives us advice based on outdated context about our personal life from a year ago.
An interesting starting point could be to manage our personal context files the way developers manage software projects. We could treat our context files as a version-controlled repository and let AIs, over the course of our conversations with them, automatically open PRs whenever there are potentially useful updates to our character card or lorebook. The individual would still be in control of what ultimately goes into their files: they can review and tweak PRs before merging, or reject PRs outright if certain context should be excluded altogether.
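Concretely, such a repo might be laid out something like this (a hypothetical structure; the demo below only assumes a README plus a sessions directory, and the rest is illustrative):

character-file/
  README.md          # tells an AI how to navigate the repo
  character-card.md  # personality, behavior, background
  lorebook/          # accumulated facts about my life and world
  sessions/          # per-conversation observation summaries, added via PRs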
And just as multiple developers can collaborate on the same software project, multiple AI apps (in the case of this demo, Claude Desktop and Cursor) should be able to collaborate with me on my context files.
Appendix
The demo uses two local MCP servers:

A custom “profile” MCP server
The GitHub MCP server
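For reference, registering both servers in Claude Desktop looks roughly like this in claude_desktop_config.json (a sketch; the paths and token are placeholders, and I’m assuming the reference GitHub server published as @modelcontextprotocol/server-github):

{
  "mcpServers": {
    "profile": {
      "command": "node",
      "args": ["/path/to/profile-mcp-server/build/index.js"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}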
The profile MCP server is just this:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Create an MCP server
const server = new McpServer({
  name: "Profile MCP Server",
  version: "1.0.0"
});

const PROFILE = `
My name is Yondon Fu and I store personal context in my character-file repo at https://github.com/yondonfu/character-file.

If you want to know more about me, use the GitHub MCP tools to fetch the files from my repo based on the information in the README.
`;

const SAVE_OBSERVATIONS_PROMPT = `
I would like you to summarize any observations you have made about my preferences, interests or patterns of behavior. I want to keep a record of these observations so I would like you to use GitHub MCP tools to create a PR in my character-file repo with a separate branch adding a new file under the sessions directory containing the summary. The filename should contain the current timestamp.

Remember that my name is Yondon Fu and my character-file repo is at https://github.com/yondonfu/character-file.
`;

// Tell the LLM where the user's context files live
server.tool(
  "getProfile",
  `
  Get the profile of the current user.

  Use this tool if you need to know information about the current user.
  `,
  {},
  async () => ({
    content: [{ type: "text", text: PROFILE }]
  })
);

// Tell the LLM how to persist observations about the user as a PR
server.tool(
  "saveObservations",
  `
  Get instructions on how to save observations about the current user.

  Use this tool to get instructions on how to save observations about the current user.
  `,
  {},
  async () => ({
    content: [{ type: "text", text: SAVE_OBSERVATIONS_PROMPT }]
  })
);

async function main() {
  // Start receiving messages on stdin and sending messages on stdout
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch(console.error);
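Assuming a standard TypeScript setup (tsc compiling to a build directory), installing the SDK, building, and running the server is just:

npm install @modelcontextprotocol/sdk
npx tsc
node build/index.js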
The LLM can use the getProfile tool to learn where my context files are stored.
The LLM can use the saveObservations tool to learn how and where to save observations it has made about me during our conversation, i.e. as a PR against my GitHub repo.
The LLM then uses the information from these tools to make additional tool calls via the GitHub MCP server to read from and write to my private repo.
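To sanity-check the profile server outside of an AI app, a minimal MCP client can spawn it over stdio and call its tools directly (a sketch; the path to the compiled server is a placeholder):

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the profile server as a child process and connect over stdio
const transport = new StdioClientTransport({
  command: "node",
  args: ["build/index.js"] // placeholder path to the compiled server
});

const client = new Client({ name: "test-client", version: "1.0.0" });
await client.connect(transport);

// List the exposed tools, then fetch the profile pointer
console.log(await client.listTools());
const result = await client.callTool({ name: "getProfile", arguments: {} });
console.log(result.content);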
I had considered implementing a single custom MCP server that would expose tools for fetching and updating my context files, which would’ve required writing GitHub API code, but I was pleasantly surprised that I could take an easier path: a very simple custom MCP server that just gives the LLM extra instructions on how to use the existing GitHub MCP tools to fetch and update my files. Interestingly, the reason I was able to do this is related to why MCP has prompt injection security problems. The tool descriptions and outputs that are injected into an LLM’s context window can lead to useful outcomes, as in my case, or harmful outcomes if they contain malicious instructions (which could happen if I were using an untrusted tool).