Boot Up Every Morning Without Thinking — the AI Already Knows You. It's Basically Cheating, Minus the Jail Time
Every morning I open my laptop. Before my fingers touch the keyboard, Claude already knows who I am. It knows which projects I'm working on, every encryption tool I use, every unfinished task, every article I've written, every idea I've captured. This isn't a chatbot. It's a second brain — one that never sleeps, never forgets, and gets smarter the longer you use it. Tell me that isn't cheating.
Is this even legal? It took me five minutes to set up an example. Look at the screenshots: the Obsidian folder structure is plain to see, the Claude config is just a few lines, and then the system runs itself. Every morning at 7:30, before I'm even out of bed, the AI has already read the new articles, updated the knowledge base, connected new knowledge to old, and flagged the contradictions. I sit down at my computer, open Obsidian, and instead of a blank editor I see a living, growing knowledge world that belongs entirely to me. It feels like hiring a free employee — one who takes no salary, no vacation, never complains, and gets more capable by the day.
You can have this running within a week
Start by downloading Obsidian. Go to the official site, grab the installer, double-click, next, next, done. The whole process takes under two minutes. Obsidian is a local note-taking app: it doesn't push your data to the cloud. Every file lives on your own machine, in plain-text Markdown. No worrying about privacy leaks, and no worrying that the company folds someday and takes your data with it.
Next, create your second brain. In Obsidian this is called a vault. Name it whatever you like — mine is Leo's Workspace. A vault is just a folder full of .md files you could open in Notepad. Obsidian simply puts a good interface on top: a graph of how your notes connect, fast navigation, bidirectional links. But the core of this system isn't the software. It's how the content inside is organized and kept up to date.
Then download Claude Code. If you don't have the desktop version yet, get it from the official site. Claude Code is Anthropic's command-line tool: you talk to Claude in your terminal, and it can read and write files on your computer. That's the game changer. Claude stops being a chat box on a web page and becomes an AI employee that operates directly on your knowledge base. Give it permission, and it can create files, edit content, and reorganize structure for you.
Once it's installed, open Claude Code and tell it where your Obsidian vault folder lives. That's it. Claude now knows all your notes are there, and it can read, update, and organize them at any time. This step matters most: the main reason people fail to get the system running is that they never point Claude at the right folder.
The magic prompt, in two halves
Andrej Karpathy wrote an [LLM Wiki prompt](https://www.jdon.com/91261-llm-personal-knowledge-system-Obsidian-wiki.html) (reproduced at the end of this article). Copy it and paste it to Claude. This prompt defines how the whole system plays: it tells Claude what your knowledge base looks like, which files go where, how to handle each situation, how to summarize new articles, how to update old pages, how to flag contradictions, how to generate the index. This prompt is your employee handbook.
The core idea of the prompt is simple.
Most people use AI in RAG mode. You dump a pile of files on the AI, and when you ask a question it digs through them for relevant bits and stitches together an answer. That works, but the AI relearns everything from scratch: every question means re-searching every file. Ask a tricky question that requires synthesizing five articles, and the AI has to find and reassemble the pieces all over again. Knowledge never accumulates. NotebookLM works this way. ChatGPT file uploads work this way. Most RAG systems work this way.
This system plays a different game.
Instead of digging through raw files only at question time, the AI continuously builds and maintains a permanent wiki: a set of structured, interlinked Markdown files that sits between you and the raw sources. When you drop in a new article, the AI doesn't just store it for later retrieval. It reads the article, extracts the key information, and integrates it into the existing wiki — updating the relevant people pages, revising topic summaries, noting where new data contradicts old claims, strengthening or challenging the evolving synthesis. Knowledge is compiled once and then kept current, not re-derived on every question.
That's the key difference. The wiki is a persistent, compounding knowledge artifact. The cross-references are already in place. The contradictions are already flagged. The synthesis already reflects everything you've read. Every new article you add and every new question you ask makes the wiki richer. You barely write any of it yourself — the AI handles all of that. You handle sourcing, exploration, and asking the right questions. The AI does the grunt work: summarizing, cross-referencing, filing, bookkeeping. Those are the chores that make a knowledge base genuinely useful and durable.
What it looks like in practice
My desktop always has two windows open: the Claude Code terminal on the left, Obsidian on the right.
Claude keeps editing the wiki based on our conversation while I watch the results in real time — following links, checking the graph, reading the freshly updated pages. Obsidian is the IDE. Claude is the programmer. The wiki is the codebase. Hold on to that metaphor and the whole system clicks.
The system covers a lot of ground. Personal management: file your journal entries, articles, and podcast notes on your goals, health, and self-improvement, and the AI assembles a self-portrait that evolves over time. Research: go deep on a topic for weeks or months, reading papers, articles, and reports, while the AI incrementally builds a full wiki with an evolving thesis. Reading a book: feed the AI each chapter as you finish it and it builds out the character pages, theme pages, and plot threads. By the end you own a high-quality companion wiki — like the fan wikis that take communities years to build, except you built it as you read, with the AI doing all the cross-referencing and maintenance.
Teams can use it too: an internal wiki maintained entirely by AI, fed with Slack threads, meeting transcripts, project documents, and customer call recordings — perhaps with a human reviewing before publishing, but with the AI doing all the maintenance nobody wants to do. Competitive analysis, due diligence, trip planning, course notes, hobby deep-dives: anything where you accumulate knowledge over time and want it organized rather than scattered, this system fits.
The three layers, plainly stated
The raw-sources layer: your curated collection of source files — articles, papers, images, data files. The AI reads these but never modifies them. This is your source of truth. The wiki layer: a folder of AI-generated Markdown files — summary pages, people pages, concept pages, comparisons, overviews, syntheses. This layer belongs entirely to the AI: it creates pages, updates them when new material arrives, maintains the cross-references, and keeps everything consistent. You read; the AI writes.
The schema layer: a configuration file, such as CLAUDE.md for Claude Code or AGENTS.md for Codex. It tells the AI how the wiki is organized, what formats to use, how to process incoming material, what workflow to follow when answering questions, and how to maintain the knowledge base. This config file is the key: it turns the AI into a disciplined wiki maintainer instead of a free-wheeling chatbot. You and the AI refine it together over time, converging on what works best for your domain.
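To make this concrete, here is a minimal, hypothetical CLAUDE.md sketch. The folder names and rules are illustrative only — you and the AI would evolve your own version over time:

```markdown
# CLAUDE.md — wiki maintainer instructions (illustrative sketch)

## Layout
- raw/           — immutable source files; read-only, never edit
- wiki/          — AI-owned pages; create and update freely
- wiki/index.md  — catalog of every page; update on every ingest
- wiki/log.md    — append-only history; never rewrite old entries

## On ingest
1. Read the new file in raw/ and discuss the key takeaways with me.
2. Write a summary page in wiki/ and link it from index.md.
3. Update every person/concept page the source touches.
4. Flag contradictions with existing claims instead of silently overwriting.
5. Append one line to log.md.
```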
Three core operations you need to master
The first operation is ingest. You drop a new source into the raw folder and tell the AI to process it. The AI reads it, discusses the key takeaways with you, writes a summary page in the wiki, updates the index, updates all the related people and concept pages, and finally appends an entry to the log. A single new source might touch 10 to 15 wiki pages. Personally I like to ingest one source at a time: I read the AI's summary, check the updates, and tell it what to emphasize. You could also batch-ingest many at once with less supervision. Work at your own pace, write the workflow into the config file, and every future session follows it.
The second operation is query. You ask the wiki a question. The AI searches for relevant pages, reads them, and synthesizes a cited answer. The form depends on the question: a Markdown page, a comparison table, a slide deck, a chart, a canvas. Here's the important part: good answers can be filed back into the wiki as new pages. The comparisons you asked for, the analyses you uncovered, the connections you dug up — these are valuable and shouldn't vanish into chat history. That way your explorations compound in the knowledge base, just like ingested sources do.
The third operation is lint. Periodically ask the AI to health-check the wiki: contradictions between pages, places where new sources supersede old claims but nothing got flagged, orphan pages nobody links to, important concepts mentioned repeatedly but lacking their own page, missing cross-references, data gaps a web search could fill. The AI is good at suggesting new questions and new sources. This keeps the wiki healthy as it grows.
Two special files keep you oriented
The first is the index file. It's content-oriented: a catalog of every page in the wiki, each with a link, a one-line summary, and optional metadata like a date or source count, grouped by category — people, concepts, sources, and so on. On every ingest, the AI updates the index automatically. When you ask a question, the AI reads the index first to find the relevant pages, then drills into them. At moderate scale — a hundred sources, a few hundred pages — this works remarkably well, with no need for vector-retrieval RAG infrastructure.
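For illustration, an index file at small scale might look like the sketch below. The page names and source counts are hypothetical; `[[...]]` is Obsidian's wikilink syntax:

```markdown
# Index

## Concepts
- [[memex]] — Bush's 1945 vision of a personal knowledge store (3 sources)
- [[rag-vs-wiki]] — why a persistent wiki beats query-time retrieval (5 sources)

## People
- [[vannevar-bush]] — engineer, author of "As We May Think" (2 sources)

## Sources
- [[src-2026-04-02-article]] — one-line summary of the ingested article
```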
The second is the log file. It's chronological and append-only: a record of what happened and when — which sources were ingested, which questions were answered, which lint passes ran. A small trick: start every entry with a consistent prefix, for example two hash marks, the date in brackets, the operation, and the title, like `## [2026-04-02] ingest Article Title`. Then the log can be parsed with simple Unix tools — a grep plus tail pulls out the last five entries. The log gives you a timeline of how the wiki grew, and tells the AI what's been done recently.
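Because the prefix is consistent, pulling recent entries needs nothing fancier than a line filter. Here is a minimal Python sketch with a hypothetical log excerpt inlined; on a real vault you'd read log.md from disk:

```python
# Filter log entries the same way `grep "^## " log.md | tail -5` would.
LOG = """\
## [2026-03-28] ingest How LLMs Compile Knowledge
some free-form notes under the entry
## [2026-03-30] ingest Notes on the Memex
## [2026-04-01] lint Weekly health check
## [2026-04-02] ingest Article Title
"""

def last_entries(log_text: str, n: int = 5) -> list[str]:
    """Return the last n entry lines (those starting with the '## ' prefix)."""
    entries = [line for line in log_text.splitlines() if line.startswith("## ")]
    return entries[-n:]

print(last_entries(LOG, 2))
```

The same one-liner works from the shell, which is exactly why the prefix convention pays off.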
Optional tools make the system stronger
You may eventually want small tools that help the AI operate on the wiki more efficiently. The obvious one is a search engine. At small scale the index file is enough, but as the wiki grows you'll want real search. One option is qmd, a local search engine built for Markdown files: it mixes BM25 with vector search, adds AI re-ranking, and runs entirely on your machine. It has a command-line interface the AI can shell out to, and an MCP server the AI can use as a native tool. Or write a simple search script yourself — have the AI bang one out whenever the need arises.
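If you'd rather not install anything yet, a toy version of that search script fits in a few lines. This is a naive tf-idf ranking, nothing like qmd's hybrid BM25/vector search, and the `PAGES` dict is a hypothetical in-memory stand-in for reading .md files off disk:

```python
import math
import re
from collections import Counter

# Hypothetical wiki pages; in practice you'd read these from the wiki/ folder.
PAGES = {
    "wiki/memex.md": "Vannevar Bush proposed the Memex, a personal knowledge store.",
    "wiki/rag.md": "RAG retrieves chunks from raw documents at query time.",
    "wiki/wiki-pattern.md": "The wiki is a persistent knowledge artifact the LLM maintains.",
}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def search(query: str, pages: dict[str, str]) -> list[str]:
    """Rank pages by summed term frequency weighted by a rough idf."""
    docs = {path: Counter(tokenize(text)) for path, text in pages.items()}
    n = len(docs)
    ranked = []
    for path, tf in docs.items():
        score = 0.0
        for term in tokenize(query):
            df = sum(1 for counts in docs.values() if term in counts)
            if df:
                score += tf[term] * math.log(1 + n / df)
        ranked.append((score, path))
    ranked.sort(reverse=True)
    return [path for score, path in ranked if score > 0]
```

Crude, but at a-hundred-pages scale a script like this is often all the "infrastructure" you need.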
Obsidian Web Clipper is a browser extension that converts web articles to Markdown — very handy for getting sources into your raw folder quickly. Download images locally: in Obsidian's settings, under Files and links, set the attachment folder path to a fixed directory such as raw/assets. Then in the hotkey settings, search for Download, find "Download attachments for current file," and bind it to a hotkey like Ctrl+Shift+D. Clip an article, hit the hotkey, and every image lands on your local disk. This isn't required, but it's useful: the AI can view and reference the images directly instead of relying on URLs that may break. Note that the AI can't read a Markdown file with inline images in one pass; the workaround is to have it read the text first, then view some or all of the images separately for extra context. Clunky, but good enough.
Obsidian's graph view is the best way to see the shape of the wiki: what connects to what, which pages are hubs, which are orphans. Marp is a Markdown-based slide format with an Obsidian plugin, so you can generate presentations straight from wiki content. Dataview is an Obsidian plugin that runs queries over page frontmatter: if your AI adds YAML frontmatter to wiki pages — tags, dates, source counts — Dataview can generate dynamic tables and lists. And the wiki is just a Git repo of Markdown files, so version history, branching, and collaboration come for free.
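For example, a concept page's frontmatter might look like the sketch below. The field names are hypothetical — they're whatever convention you and the AI settle on:

```markdown
---
tags: [concept]
created: 2026-04-02
sources: 3
---
# Memex
Summary of the concept, maintained by the AI...
```

A Dataview query such as `TABLE sources, created FROM #concept SORT sources DESC` would then render a live table of every concept page, sorted by how many sources back it.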
Why this system doesn't die
The most tedious part of maintaining a knowledge base isn't the reading or the thinking — it's the bookkeeping: updating cross-references, keeping summaries current, flagging where new data contradicts old claims, keeping dozens of pages consistent. Humans abandon wikis because the maintenance burden grows faster than the value. AI doesn't get bored, doesn't forget to update a reference, and can touch 15 files in one pass. The wiki stays maintained because the cost of maintenance is near zero.
The human's job is to curate sources, direct the analysis, ask good questions, and think about what it all means. The AI's job is everything else. The spiritual ancestor of this idea goes back to 1945, when Vannevar Bush proposed the Memex: a private, carefully curated knowledge store with associative trails between documents. Bush's vision was closer to this system than to what the web became — private, actively maintained, with the connections between documents as valuable as the documents themselves. The problem he couldn't solve was who does the maintenance. Now the AI does.
Everything that vanished on you this past year
Think of the books you read this past year and have already forgotten. The podcast that changed your mind about something. The article you saved at 11 p.m. and never opened again. The YouTube rabbit holes that taught you more than any course. The Kindle highlights you never revisited. The research you did before a big decision. The notes from old projects. The lessons learned from the failures. All of it is scattered everywhere, doing you no good. It belongs in your knowledge base.
Even if you have no material right now, open a Claude chat and talk for twenty minutes — about your work, your goals, what you're building, what you're mulling over. Save that conversation as your memory file. That little bit is enough to make the next conversation feel like Claude actually knows you. A knowledge base doesn't need to be complete to be useful. It only needs to exist.
Ingest: you clip an article with the web clipper and it lands in the raw folder. You tell Claude: "I just added an article to the raw sources folder. Read it, extract the key claims, write a summary page in the wiki's summaries folder, update the index file with a link and a one-line description, update every existing concept page this article touches, and tell me which files you changed." One article, and Claude links 10 to 15 wiki pages, surfaces connections you hadn't noticed, flags contradictions, and records every change.
Query: you ask the wiki a question. Claude scans the index, pulls the relevant pages, and answers with citations, then files the best answers back into the wiki — comparison tables, analyses, new connections — so insights don't get lost in chat history and your knowledge base keeps compounding. Lint: run it weekly. Claude reads every file in the wiki, looking for contradictions between pages, orphan pages with no inbound links, concepts mentioned repeatedly without their own page, and claims that newer sources have made stale. It writes a health report into the wiki directory with concrete fixes. Your knowledge base keeps itself healthy. Maintenance is no longer your job.
Automation puts it fully on autopilot
Write a Python script called morning summary. It does three things: reads your memory file and finds the todos due today, scans the raw folder for files added in the last 24 hours, and prints a clean briefing in the terminal. Then schedule it as a cron job at 7:30 every morning. Set it up once, and it runs itself every day without you touching anything.
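A minimal sketch of that script, under two hypothetical assumptions: todos in the memory file look like `- [ ] task @due:YYYY-MM-DD`, and raw sources sit in a flat folder. Adjust both to your vault's actual conventions:

```python
import re
import time
from datetime import date
from pathlib import Path

# Hypothetical todo convention: "- [ ] task @due:YYYY-MM-DD".
DUE_RE = re.compile(r"^- \[ \] (?P<task>.+?) @due:(?P<due>\d{4}-\d{2}-\d{2})$")

def todos_due(memory_text: str, today: date) -> list[str]:
    """Open todos due today or overdue; checked boxes ("- [x]") are skipped."""
    hits = []
    for line in memory_text.splitlines():
        m = DUE_RE.match(line.strip())
        if m and date.fromisoformat(m["due"]) <= today:
            hits.append(m["task"])
    return hits

def recent_files(raw_dir: Path, hours: float = 24) -> list[str]:
    """Names of raw-source files modified within the last `hours`."""
    cutoff = time.time() - hours * 3600
    return sorted(p.name for p in raw_dir.iterdir()
                  if p.is_file() and p.stat().st_mtime >= cutoff)

# Sample memory file inlined for illustration; in practice, read it from disk.
MEMORY = """\
- [ ] Draft the Q2 roadmap @due:2026-04-01
- [x] Renew domain @due:2026-03-30
- [ ] Review wiki lint report @due:2026-04-02
- [ ] Plan offsite @due:2026-05-20
"""

print(todos_due(MEMORY, date(2026, 4, 2)))
```

Scheduling it is one crontab line — `30 7 * * * python ~/scripts/morning_summary.py` (a hypothetical path) runs it at 7:30 every morning.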
Another example: "Read today's call transcript. Extract every decision, every action item with an owner and a due date, and a three-line summary. Add the action items to the todo tracker, record the decisions in the decision log, and create a client note in the client folder that links back to this transcript." Every decision gets filed, every action item gets tracked, and nothing disappears into chat history.
Why isn't this common knowledge yet
The reason is simple: the people who've built this system haven't explained it well, and the people who need it don't know it exists. Most second-brain projects die the same death. You start out organized. The maintenance piles up — updating tags, keeping references current, restructuring when the structure changes. It's all extra work on top of your real work. You stop doing it, the system decays, you go back to scattered notes, and six months later you try to rebuild. The cycle repeats.
Claude breaks that cycle for good. Maintenance is one command. Restructuring the entire knowledge base is one prompt. Migrating from Notion? One command processes every exported file, adds the right properties, and reorganizes everything into your new system. The human's job is to curate sources, ask good questions, and think about what the knowledge means. Claude's job is everything else: summarizing, cross-referencing, filing, bookkeeping — the chores that make a knowledge base genuinely useful and durable.
Vannevar Bush described something like this in 1945: a private, carefully maintained knowledge store in which the connections between documents were as valuable as the documents themselves. He called it the Memex. The problem he couldn't solve was who does the maintenance. Now you know who does. Build it once, use it for life, and it gets smarter with every day you feed it. That's why this thing deserves to be called cheating.
The most important piece is Andrej Karpathy's LLM Wiki prompt, reproduced below:
# LLM Wiki
A pattern for building personal knowledge bases using LLMs.
This is an idea file, it is designed to be copy pasted to your own LLM Agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, or etc.). Its goal is to communicate the high level idea, but your agent will build out the specifics in collaboration with you.
The core idea
Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
The idea here is different. Instead of just retrieving from raw documents at query time, the LLM incrementally builds and maintains a persistent wiki — a structured, interlinked collection of markdown files that sits between you and the raw sources. When you add a new source, the LLM doesn't just index it for later retrieval. It reads it, extracts the key information, and integrates it into the existing wiki — updating entity pages, revising topic summaries, noting where new data contradicts old claims, strengthening or challenging the evolving synthesis. The knowledge is compiled once and then *kept current*, not re-derived on every query.
This is the key difference: the wiki is a persistent, compounding artifact. The cross-references are already there. The contradictions have already been flagged. The synthesis already reflects everything you've read. The wiki keeps getting richer with every source you add and every question you ask.
You never (or rarely) write the wiki yourself — the LLM writes and maintains all of it. You're in charge of sourcing, exploration, and asking the right questions. The LLM does all the grunt work — the summarizing, cross-referencing, filing, and bookkeeping that makes a knowledge base actually useful over time. In practice, I have the LLM agent open on one side and Obsidian open on the other. The LLM makes edits based on our conversation, and I browse the results in real time — following links, checking the graph view, reading the updated pages. Obsidian is the IDE; the LLM is the programmer; the wiki is the codebase.
This can apply to a lot of different contexts. A few examples:
- Personal: tracking your own goals, health, psychology, self-improvement — filing journal entries, articles, podcast notes, and building up a structured picture of yourself over time.
- Research: going deep on a topic over weeks or months — reading papers, articles, reports, and incrementally building a comprehensive wiki with an evolving thesis.
- Reading a book: filing each chapter as you go, building out pages for characters, themes, plot threads, and how they connect. By the end you have a rich companion wiki. Think of fan wikis like [Tolkien Gateway](https://tolkiengateway.net/wiki/Main_Page) — thousands of interlinked pages covering characters, places, events, languages, built by a community of volunteers over years. You could build something like that personally as you read, with the LLM doing all the cross-referencing and maintenance.
- Business/team: an internal wiki maintained by LLMs, fed by Slack threads, meeting transcripts, project documents, customer calls. Possibly with humans in the loop reviewing updates. The wiki stays current because the LLM does the maintenance that no one on the team wants to do.
- Competitive analysis, due diligence, trip planning, course notes, hobby deep-dives — anything where you're accumulating knowledge over time and want it organized rather than scattered.
Architecture
There are three layers:
Raw sources — your curated collection of source documents. Articles, papers, images, data files. These are immutable — the LLM reads from them but never modifies them. This is your source of truth.
The wiki — a directory of LLM-generated markdown files. Summaries, entity pages, concept pages, comparisons, an overview, a synthesis. The LLM owns this layer entirely. It creates pages, updates them when new sources arrive, maintains cross-references, and keeps everything consistent. You read it; the LLM writes it.
The schema — a document (e.g. CLAUDE.md for Claude Code or AGENTS.md for Codex) that tells the LLM how the wiki is structured, what the conventions are, and what workflows to follow when ingesting sources, answering questions, or maintaining the wiki. This is the key configuration file — it's what makes the LLM a disciplined wiki maintainer rather than a generic chatbot. You and the LLM co-evolve this over time as you figure out what works for your domain.
Operations
Ingest. You drop a new source into the raw collection and tell the LLM to process it. An example flow: the LLM reads the source, discusses key takeaways with you, writes a summary page in the wiki, updates the index, updates relevant entity and concept pages across the wiki, and appends an entry to the log. A single source might touch 10-15 wiki pages. Personally I prefer to ingest sources one at a time and stay involved — I read the summaries, check the updates, and guide the LLM on what to emphasize. But you could also batch-ingest many sources at once with less supervision. It's up to you to develop the workflow that fits your style and document it in the schema for future sessions.
Query. You ask questions against the wiki. The LLM searches for relevant pages, reads them, and synthesizes an answer with citations. Answers can take different forms depending on the question — a markdown page, a comparison table, a slide deck (Marp), a chart (matplotlib), a canvas. The important insight: good answers can be filed back into the wiki as new pages. A comparison you asked for, an analysis, a connection you discovered — these are valuable and shouldn't disappear into chat history. This way your explorations compound in the knowledge base just like ingested sources do.
Lint. Periodically, ask the LLM to health-check the wiki. Look for: contradictions between pages, stale claims that newer sources have superseded, orphan pages with no inbound links, important concepts mentioned but lacking their own page, missing cross-references, data gaps that could be filled with a web search. The LLM is good at suggesting new questions to investigate and new sources to look for. This keeps the wiki healthy as it grows.
Indexing and logging
Two special files help the LLM (and you) navigate the wiki as it grows. They serve different purposes:
index.md is content-oriented. It's a catalog of everything in the wiki — each page listed with a link, a one-line summary, and optionally metadata like date or source count. Organized by category (entities, concepts, sources, etc.). The LLM updates it on every ingest. When answering a query, the LLM reads the index first to find relevant pages, then drills into them. This works surprisingly well at moderate scale (~100 sources, ~hundreds of pages) and avoids the need for embedding-based RAG infrastructure.
log.md is chronological. It's an append-only record of what happened and when — ingests, queries, lint passes. A useful tip: if each entry starts with a consistent prefix (e.g. `## 2026-04-02 ingest | Article Title`), the log becomes parseable with simple unix tools — `grep "^## " log.md | tail -5` gives you the last 5 entries. The log gives you a timeline of the wiki's evolution and helps the LLM understand what's been done recently.
Optional: CLI tools
At some point you may want to build small tools that help the LLM operate on the wiki more efficiently. A search engine over the wiki pages is the most obvious one — at small scale the index file is enough, but as the wiki grows you want proper search. [qmd](https://github.com/tobi/qmd) is a good option: it's a local search engine for markdown files with hybrid BM25/vector search and LLM re-ranking, all on-device. It has both a CLI (so the LLM can shell out to it) and an MCP server (so the LLM can use it as a native tool). You could also build something simpler yourself — the LLM can help you vibe-code a naive search script as the need arises.
Tips and tricks
- Obsidian Web Clipper is a browser extension that converts web articles to markdown. Very useful for quickly getting sources into your raw collection.
- Download images locally. In Obsidian Settings → Files and links, set "Attachment folder path" to a fixed directory (e.g. raw/assets/). Then in Settings → Hotkeys, search for "Download" to find "Download attachments for current file" and bind it to a hotkey (e.g. Ctrl+Shift+D). After clipping an article, hit the hotkey and all images get downloaded to local disk. This is optional but useful — it lets the LLM view and reference images directly instead of relying on URLs that may break. Note that LLMs can't natively read markdown with inline images in one pass — the workaround is to have the LLM read the text first, then view some or all of the referenced images separately to gain additional context. It's a bit clunky but works well enough.
- Obsidian's graph view is the best way to see the shape of your wiki — what's connected to what, which pages are hubs, which are orphans.
- Marp is a markdown-based slide deck format. Obsidian has a plugin for it. Useful for generating presentations directly from wiki content.
- Dataview is an Obsidian plugin that runs queries over page frontmatter. If your LLM adds YAML frontmatter to wiki pages (tags, dates, source counts), Dataview can generate dynamic tables and lists.
- The wiki is just a git repo of markdown files. You get version history, branching, and collaboration for free.
Why this works
The tedious part of maintaining a knowledge base is not the reading or the thinking — it's the bookkeeping. Updating cross-references, keeping summaries current, noting when new data contradicts old claims, maintaining consistency across dozens of pages. Humans abandon wikis because the maintenance burden grows faster than the value. LLMs don't get bored, don't forget to update a cross-reference, and can touch 15 files in one pass. The wiki stays maintained because the cost of maintenance is near zero.
The human's job is to curate sources, direct the analysis, ask good questions, and think about what it all means. The LLM's job is everything else.
The idea is related in spirit to Vannevar Bush's Memex (1945) — a personal, curated knowledge store with associative trails between documents. Bush's vision was closer to this than to what the web became: private, actively curated, with the connections between documents as valuable as the documents themselves. The part he couldn't solve was who does the maintenance. The LLM handles that.
Note
This document is intentionally abstract. It describes the idea, not a specific implementation. The exact directory structure, the schema conventions, the page formats, the tooling — all of that will depend on your domain, your preferences, and your LLM of choice. Everything mentioned above is optional and modular — pick what's useful, ignore what isn't. For example: your sources might be text-only, so you don't need image handling at all. Your wiki might be small enough that the index file is all you need, no search engine required. You might not care about slide decks and just want markdown pages. You might want a completely different set of output formats. The right way to use this is to share it with your LLM agent and work together to instantiate a version that fits your needs. The document's only job is to communicate the pattern. Your LLM can figure out the rest.