Anthropic has begun rolling out its Claude Memory feature to paid subscribers, starting with Claude Max users who now have immediate access. Claude Pro subscribers will gain access in the coming days. This expansion follows earlier releases to Enterprise and Team plans, marking a strategic step in Anthropic’s effort to deepen personalization and continuity in AI interactions.
Memory allows Claude to retain and reference details across conversations, reducing the need to re-explain context. As Anthropic puts it, “the friction of starting over disappears.” The company emphasizes that each interaction now builds on prior ones—whether refining startup pitches, accumulating research insights, or maintaining consistent code environments.
“Every conversation now builds on the last,” Anthropic writes. “Research papers build on accumulated sources and insights. Startup pitches evolve with each iteration. Code remembers environment setup and patterns.”
A demo illustrates this capability: a user requests help drafting a quarterly self-review. Claude automatically retrieves relevant accomplishments from past discussions, synthesizing them into a structured summary—showcasing how Memory streamlines knowledge retrieval and workflow continuity.
How Memory Works: Control, Transparency, and Integration
To activate Memory, users must enable two settings:
- Search and reference chats
- Generate memory from chat history
Located in the Settings menu, these toggles activate Memory’s core functionality. What distinguishes Claude, according to Anthropic, is transparency: users can view, edit, and manage what the AI remembers—using natural language. For example, you can say, “Focus on my project goals,” or “Forget our conversation about X.”
Additional features include:
- Selective memory creation: Choose whether to generate memories from new or past chats.
- Project isolation: Memories are scoped per Project, preventing cross-contamination between personal and professional contexts.
- Import/export support: Seamlessly transfer memories between Claude, ChatGPT, and Gemini, enhancing interoperability across AI platforms.
- Incognito mode: Temporarily disable memory recording for sensitive discussions.
This level of user control aligns with Anthropic's emphasis on responsible AI development. Prior to launch, the company conducted extensive testing on sensitive topics, assessing risks such as reinforcing harmful patterns, over-accommodating users, or attempts to bypass safety protocols. The stated goal: ensure Memory enhances assistance without compromising safety or user autonomy.