Experimental RAG Implementation using MemoryBanditWorkflow (LangChain 1.x)¶

(Version: 0.0.16.4)

Overview¶

MemoryBanditWorkflow is an agent framework developed based on experiences with the Bear-Sword Maze problem. It has been updated to support LangChain 1.x and introduces subtool_do, a feature equivalent to modern concepts like "skills" or "toolboxes."

Although it originated as part of a maze-solving project, I aimed to design it as a versatile framework. This project represents an attempt to build a RAG (Retrieval-Augmented Generation) agent to demonstrate that the framework is indeed capable of general-purpose tasks.

The Idea of RagAgent¶

For more details on the core concepts of MemoryBanditWorkflow and the "Sub-tools" idea, please refer to the following notebook (in English):

《langchain_maze_en_0_0_15.ipynb - JRF-2018/langchain_maze》
https://github.com/JRF-2018/langchain_maze/blob/master/langchain_maze_en_0_0_15.ipynb

In essence, MemoryBanditWorkflow provides integrated memory, bandit, and workflow functions. This project uses those features as-is to define a child class called RagAgent.

Note that implementing a full semantic-search backend was too cumbersome for this experiment, so the backend logic is "simulated" by the AI: in effect, the LLM stands in for the database and spoofs its responses.

The original maze problem that served as the foundation for MemoryBanditWorkflow can be traced here (in Japanese):

《JRF-2018/langchain_maze: Bear-Sword Maze Problem Revisited》
https://github.com/JRF-2018/langchain_maze

While multi-agent systems are currently the trend for RAG architectures, this implementation runs strictly in a linear, single-threaded sequence. To truly implement a multi-agent approach, one would need to use asyncio or a formal vector database for memory. However, such extensions are beyond the current scope. The primary goal here is to demonstrate that RAG can be successfully implemented on top of the MemoryBanditWorkflow framework; hence, there is no claim of novelty regarding the RAG logic itself.

Links to Previous Versions (Japanese)¶

《experimental_rag_0_0_2.ipynb - JRF-2018/langchain_maze》
https://github.com/JRF-2018/langchain_maze/blob/master/experimental_rag_0_0_2.ipynb

《experimental_rag_0.0.16.2.ipynb - GitHub Gist》
https://gist.github.com/JRF-2018/f4f9565095611aea2ab1b24be6596145

Changes from Previous Versions¶

  • LangChain 1.x Support: Addressed type errors originating from the specific implementation details of Pydantic v2 and Gemini. It is currently operational, but as these are stopgap measures, future stability is not guaranteed.

  • Sub-tools: Introduced subtool_do and subtool_show, which allow the agent to store tools and use them only after reading their descriptions. What is displayed by subtool_show is roughly equivalent to a SKILL.md file in other agent frameworks.

  • 0.0.16.2: Fixed minor bugs found in 0.0.16.1.

  • 0.0.16.3: Full translation of the interface and documentation into English.
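
The sub-tool mechanism described above can be sketched as a small dispatch layer. The following is a minimal illustration with a hypothetical registry and tool names, not the framework's actual implementation: tools live under '/'-prefixed paths, the agent first reads a description via `subtool_show`, and only then invokes the tool indirectly via `subtool_do`.

```python
# Minimal sketch of the sub-tool pattern (hypothetical registry/tool names).
SUBTOOLS = {
    "/demo/greet": {
        "description": "Returns a greeting for the given name.",
        "func": lambda args: f"Hello, {args['name']}!",
    },
}

def subtool_show(path):
    # "/" lists all sub-tools; a full path shows one tool's description.
    if path == "/":
        return "\n".join(f"{p}: {e['description']}" for p, e in SUBTOOLS.items())
    return SUBTOOLS[path]["description"]

def subtool_do(path, args):
    # Execute a sub-tool by path with a dict of arguments.
    return SUBTOOLS[path]["func"](args)

print(subtool_show("/"))
print(subtool_do("/demo/greet", {"name": "world"}))  # → Hello, world!
```

The point of the indirection is that the full tool manuals never need to sit in the agent's context; they are read on demand, much like a SKILL.md file.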

Conclusion¶

Findings from the initial experiment (v0.0.16.1):¶

The system was tested with gemini-2.5-flash-lite, and the final execution was performed by gemini-3-flash-preview. During the process, I discovered an infinite loop caused by my own mistake; I stopped the execution, fixed the bug, and resumed. To manage API costs, the logs provided here are from that resumed session rather than a clean re-run.

The execution of sub-tools proved difficult for the agent initially; it failed to complete the thesis using the /thesis sub-tools on the first attempt. However, after specifically "nudging" the agent to use the sub-tools, it was able to reach completion.

Whether due to these factors or others, the content itself was covered, but I felt the structural quality was actually better in the previous version. Still, as a verification experiment for MemoryBanditWorkflow and sub-tools, I believe the results are acceptable.

Findings from v0.0.16.2:¶

Perhaps because I explicitly instructed the agent in the prompt to utilize the /thesis sub-toolset, the process was much smoother, and the agent wrote the entire thesis in one go. My impression was that I would have liked the research phase to be a bit longer, but for a proof of concept, it works perfectly fine.

Findings from v0.0.16.3:¶

The system appears to function correctly in English as well.

Please note that due to the inherent lack of reproducibility in LLM outputs and to manage costs, I have not re-run the process from scratch for this documentation. (^^;

Author¶

JRF ( http://jrf.cocolog-nifty.com/statuses , Twitter (X): @jion_rockford )

License¶

Since the code is relatively short, I intended for my parts to be in the Public Domain. If you have concerns, please treat it under the MIT License.

This was developed with significant guidance from various AIs (Gemini, ChatGPT, Claude, and Grok).

Implementation¶

First, we will import the necessary libraries.

In [ ]:
!pip install -q -U langchain langchain-google-genai duckduckgo-search langchain-community beautifulsoup4 ddgs
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
google-colab 1.0.0 requires requests==2.32.4, but you have requests 2.32.5 which is incompatible.

Accessing Gemini. If you have imported your Gemini API key from Google AI Studio into your Colab environment secrets, a secret named GOOGLE_API_KEY should exist; we will use it below.

In [ ]:
import os
from langchain.chat_models import init_chat_model
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from google.colab import userdata

#os.environ["GOOGLE_API_KEY"] = userdata.get('GOOGLE_API_KEY')

llm = init_chat_model(
    "google_genai:gemini-3-flash-preview",
    google_api_key=userdata.get('GOOGLE_API_KEY'),
#    thinking_level="low", # for gemini-3.0
#    thinking_budget=0, # for gemini-2.5
)
emb_llm = GoogleGenerativeAIEmbeddings(
    model='gemini-embedding-001',
    google_api_key=userdata.get('GOOGLE_API_KEY'),
)

Testing whether we can access Gemini properly.

In [ ]:
import os
from langchain_core.messages import HumanMessage

# Helper function to extract text content (Required for compatibility with Gemini 3)
def get_content_text(content):
    if isinstance(content, list):
        texts = []
        for item in content:
            if isinstance(item, dict):
                if item.get('type') == 'text':
                    texts.append(item.get('text', ''))
            elif isinstance(item, str):
                texts.append(item)
        return "".join(texts)
    return content

response = llm.invoke([HumanMessage(content="Please tell me the features of the Gemini model.")])
print(get_content_text(response.content))
Google’s Gemini is a family of multimodal large language models developed by Google DeepMind. It was built from the ground up to be "natively multimodal," meaning it can understand and operate across text, code, audio, image, and video.

Here are the key features of the Gemini model:

### 1. Native Multimodality
Unlike older models that were trained on text and then "bolted on" to image or audio encoders, Gemini was trained on multiple formats simultaneously from the start.
*   **Deep Understanding:** It can seamlessly reason across different types of input. For example, you can show it a video of a physics experiment and ask it to explain the concepts or predict what happens next.
*   **Cross-Modal Reasoning:** It can "see" an image and write code to recreate it, or listen to an audio file and summarize it in text.

### 2. Massive Context Window
One of Gemini’s most significant competitive advantages (specifically in the 1.5 Pro and Flash versions) is its massive context window.
*   **Capacity:** It supports up to **1 million to 2 million tokens**.
*   **What this means:** You can upload an entire hour of video, 11 hours of audio, codebases with over 30,000 lines, or documents over 700,000 words in a single prompt. It can then find specific information or "reason" across that entire dataset.

### 3. Model Tiers (Sizes for Every Use)
Gemini is offered in several versions optimized for different tasks:
*   **Gemini Ultra:** The largest and most capable model, designed for highly complex tasks (reasoning, coding, and nuance).
*   **Gemini Pro:** A versatile, mid-sized model optimized to scale across a wide range of tasks. It powers the standard Gemini chatbot.
*   **Gemini Flash:** A newer, lightweight model designed for speed and efficiency at a lower cost, while maintaining high intelligence.
*   **Gemini Nano:** An efficient model designed to run **on-device** (e.g., on Pixel 8/9 or Samsung S24). This allows for privacy and offline use without needing a cloud connection.

### 4. Advanced Reasoning and Coding
*   **Complex Problem Solving:** Gemini excels at math and logic. It performs at a high level on benchmarks like MMLU (Massive Multitask Language Understanding).
*   **AlphaCode 2:** Gemini uses a specialized version of its engine for competitive programming. It is highly proficient in languages like Python, Java, C++, and Go.
*   **Scientific Discovery:** It can filter through thousands of pages of research papers to extract data and find correlations that humans might miss.

### 5. Integration with Google Ecosystem
Gemini is deeply integrated into Google’s existing products:
*   **Extensions:** It can pull real-time information from your **Gmail, Google Drive, Maps, and YouTube**. For example, you can ask, "Find the flight details from my email and add them to my calendar."
*   **Android Integration:** On mobile, Gemini is replacing Google Assistant as a more capable, conversational AI that can interact with the apps on your screen.

### 6. High Efficiency and Speed
With the introduction of **Gemini 1.5**, Google moved to a "Mixture-of-Experts" (MoE) architecture. 
*   Instead of activating the entire neural network for every prompt, it only activates the most relevant "expert" pathways.
*   This makes the model significantly faster and more efficient than previous versions.

### 7. Safety and Ethics
Google emphasizes "Red Teaming" (testing for vulnerabilities) and safety filters.
*   The models are built with protections against generating hate speech, sexually explicit content, and dangerous instructions.
*   It includes a **"Double Check"** feature in the Gemini app that uses Google Search to verify whether the AI’s response is supported or contradicted by web sources.

### Summary Comparison: Gemini vs. Others
While models like GPT-4 are excellent at reasoning, **Gemini’s "killer feature" is its 2-million-token context window and its deep integration with Google’s data apps.** This makes it particularly powerful for researchers, developers, and power users who work with massive amounts of information.

Let's also test the embedding vectors.

In [ ]:
emb_llm.embed_query("This is a test.")[:5]
Out[ ]:
[-0.019542728, 0.0036680987, 0.0044811117, -0.069937535, 0.0015621887]
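
Embedding vectors like this one are typically compared by cosine similarity, which is how a semantic-search backend would rank documents against a query. A minimal sketch in plain NumPy (illustrative only; the simulated backend in this notebook does not actually run this):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = [0.1, 0.2, 0.3]
doc = [0.2, 0.4, 0.6]  # same direction as the query, so similarity is 1.0
print(round(cosine_similarity(query, doc), 6))  # → 1.0
```

Vectors pointing in the same direction score 1.0 regardless of magnitude, which is why cosine similarity is preferred over raw dot products for ranking.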

Importing basic modules.

In [ ]:
import os
import math
import numpy as np
import random
import re
from pprint import pprint
from time import sleep
import pickle
np.set_printoptions(legacy='1.25')

The following constant names the file used by the save/load functionality.

In [ ]:
RAG_AGENT_SAVE = "rag-agent.pickle"

Next, we import the remaining required libraries and LangChain components.

In [ ]:
from pydantic import ValidationError
from typing import List, Dict, Any, Tuple, Union
from textwrap import dedent
import datetime
import copy
import inspect
from IPython.display import Markdown

# Import LangChain components
from langchain_core.tools import tool, Tool
from langchain.agents.middleware import SummarizationMiddleware
from langchain.agents.middleware.summarization import DEFAULT_SUMMARY_PROMPT
from langchain.agents import create_agent
#from langgraph.prebuilt import create_react_agent
#from langchain_core.messages.utils import count_tokens_approximately
#from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.checkpoint.memory import InMemorySaver
from langchain_core.prompts.chat import ChatPromptTemplate
#from langmem.short_term import SummarizationNode, summarize_messages
from langchain_core.messages import AIMessage, ToolMessage, HumanMessage, SystemMessage
from langgraph.errors import GraphRecursionError
from langchain_community.tools import DuckDuckGoSearchResults
from langchain_community.document_loaders import WebBaseLoader

SUMMARY_PROMPT = DEFAULT_SUMMARY_PROMPT + "\n\n**Please provide the summary in English.**"
WARNING:langchain_community.utils.user_agent:USER_AGENT environment variable not set, consider setting it to identify your requests.
In [ ]:
def calc_embedding_variance(embeddings):
    if not embeddings or len(embeddings) < 2:
        return 0.0

    embeddings_array = np.array(embeddings)
    mean_vector = np.mean(embeddings_array, axis=0)
    squared_distances = np.linalg.norm(embeddings_array - mean_vector, axis=1)**2
    variance = np.mean(squared_distances)

    return variance

def short_repr(x, max_len=80):
    repr_str = repr(x)

    if len(repr_str) > max_len:
        ellipsis_len = 3

        head_len = max_len - ellipsis_len - 1
        tail_len = 1
        return repr_str[:head_len] + "..." + repr_str[-tail_len:]
    else:
        return repr_str

def get_content_text(content):
    if isinstance(content, list):
        texts = []
        for item in content:
            if isinstance(item, dict):
                if item.get('type') == 'text':
                    texts.append(item.get('text', ''))
            elif isinstance(item, str):
                texts.append(item)
        return "".join(texts)
    return content
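
As a quick sanity check of the helper above: calc_embedding_variance treats each embedding as a point in space and returns the mean squared distance from the centroid. For two opposite unit vectors the centroid is the origin and each point sits at distance 1, so the variance is exactly 1.0 (the definition is restated here only to keep the example self-contained):

```python
import numpy as np

def calc_embedding_variance(embeddings):
    # Mean squared Euclidean distance of the embeddings from their centroid.
    if not embeddings or len(embeddings) < 2:
        return 0.0
    arr = np.array(embeddings)
    mean_vector = np.mean(arr, axis=0)
    squared_distances = np.linalg.norm(arr - mean_vector, axis=1) ** 2
    return float(np.mean(squared_distances))

# Centroid of [1, 0] and [-1, 0] is the origin; both squared distances are 1.
print(calc_embedding_variance([[1.0, 0.0], [-1.0, 0.0]]))  # → 1.0
```

A high variance indicates the stored memories are semantically spread out, while a value near zero means they cluster around a single topic.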

MemoryBanditWorkflow is unchanged from langchain_maze_en_0_0_15.ipynb. I apologize for the lengthy definition.

In [ ]:
class MemoryBanditWorkflow:
    """
    MemoryBanditWorkflow: A generic framework for decision-making tasks
    integrating memory management and strategic planning.
    """
    def __init__ (self, llm=llm, llm2=llm, emb_llm=emb_llm,
                  save_file=None):
        self.llm = llm
        self.llm2 = llm2
        self.emb_llm = emb_llm
        self.save_file = save_file

        self.core_context = ""
        self.plan = "Plan and policy have not been set yet."
        self.scratchpad = ""

        self.messages = []
        self.running_summary = None
        self.system_prompt = """\
This is an experiment to observe the behavior of memory and bandit functions.
Consider the current plan, policy, and surrounding circumstances, and update the plan and policy as necessary.
Leave a plan and policy that makes it easy for another agent to take the next action.
As befits an experiment, use tools as much as possible—specifically search memory and update documents to prepare for future commands.

Memory IDs follow the format 'memory:...'. Specifying only the number (e.g., '5555') instead of 'memory:5555' is insufficient. When referencing memory in text, write it as [memory:...].
The 'procedure for searching memory documents' is located in [memory:9998].
The 'procedure for storing documents in memory' is located in [memory:9997].

Many tools are stored as sub-tools. Sub-tool names start with '/', such as '/dir1/subtool1'. To see available sub-tools, first run subtool_show("/").
"""

        self.backend_status = None
        self.backend_result = None
        self.messages2 = []
        self.system_prompt2 = """\
You are a backend agent supporting the main agent.
While this backend is intended to be implemented using various advanced techniques, it is currently in a testing phase, so you must simulate its behavior.

Think carefully, use tools proactively, and follow the instructions from the Human.
"""

        self.memories = {}
        self.keywords = []

        self.tools = {}
        self.tool_tag = "null_tools"

        self.access_unit = 1.0
        self.recent_reads = []

        self.workflows = {}
        self.workflow_current = "workflow:main"
        self.workflow_next = None
        self.privileged_tool_names = []

        self.init_memories()
        self.init_workflows()
        self.init_tools()


    def __getstate__ (self):
        state = self.__dict__.copy()
        del state['llm']
        del state['llm2']
        del state['emb_llm']
        del state['tools']
        #del state['agent']
        return state

    def __setstate__ (self, state):
        self.__dict__.update(state)
        self.prev_load = True

    def save (self):
        if not self.save_file:
            return
        with open(self.save_file, 'wb') as f:
            pickle.dump(self, f)

    @classmethod
    def load (cls, filename, llm=llm, llm2=llm, emb_llm=emb_llm):
        with open(filename, 'rb') as f:
            loaded_game = pickle.load(f)
        loaded_game.llm = llm
        loaded_game.llm2 = llm2
        loaded_game.emb_llm = emb_llm
        loaded_game.tools = {}
        loaded_game.init_tools()
        return loaded_game

    def normalize_memory_id(self, id_or_num):
        if isinstance(id_or_num, int):
            return f"memory:{id_or_num}"
        elif isinstance(id_or_num, str):
            m = re.search(r'\[?memory:(\d+)\]?', id_or_num)
            if m:
                return f"memory:{m.group(1)}"
            if id_or_num.isdigit():
                return f"memory:{id_or_num}"
            else:
                return id_or_num
        else:
            return id_or_num

    def _normalize_workflow_id_sub(self, id_or_num):
        if isinstance(id_or_num, int):
            return f"workflow:{id_or_num}"
        if id_or_num in ["current", "main"]:
            return f"workflow:{id_or_num}"
        elif isinstance(id_or_num, str):
            m = re.search(r'\[?workflow:(\d+|main|current)\]?(?:.+)?', id_or_num.strip())
            if m:
                return f"workflow:{m.group(1)}"
            if id_or_num.isdigit():
                return f"workflow:{id_or_num}"
            else:
                return id_or_num
        else:
            return id_or_num

    def normalize_workflow_id(self, id_or_num):
        r = self._normalize_workflow_id_sub(id_or_num)
        if r == "workflow:current":
            return self.workflow_current
        return r

    def register_tool (self, tool, tags=None):
        if not tags:
            tags = ["default_tools", "all_tools"]
        self.tools[tool.name] = {
            'name': tool.name,
            'tags': tags,
            'tool': tool
        }

    def change_tool_tags (self, tool, tags=None):
        if not tags:
            tags = ["default_tools", "all_tools"]
        name = tool if isinstance(tool, str) else tool.name
        self.tools[name]['tags'] = tags

    def register_subtools (self, directory, subtools,
                           description=None, content=None,
                           tags=None):
        """Registers a group of sub-tools under a specific directory path."""
        if not tags:
            tags = ["default_tools", "all_tools"]
        assert directory.startswith("/")
        if directory not in self.tools:
            self.tools[directory] = {
                'name': directory,
            }
        if description:
            self.tools[directory]['description'] = description
        if content:
            self.tools[directory]['content'] = content

        # Both content and description are required for initial setup
        assert 'description' in self.tools[directory]
        assert 'content' in self.tools[directory]

        for name, tool in subtools:
            assert name.startswith(directory + "/")
            self.tools[name] = {
                'name': name,
                'tags': tags,
                'tool': tool,
            }

    def _create_tool_manual(self, tool_obj):
        """Generates a manual entry for a standard tool."""
        tool_name = tool_obj.name
        tool_description = getattr(tool_obj, "description", "No description available.")

        arg_names = []
        if hasattr(tool_obj, "args_schema") and tool_obj.args_schema:
            if hasattr(tool_obj.args_schema, "model_fields"):
                arg_names = list(tool_obj.args_schema.model_fields.keys())
            else:
                arg_names = list(tool_obj.args_schema.__fields__.keys())
        else:
            # Fallback for simple functions or older LangChain tools
            func = getattr(tool_obj, "func", tool_obj)
            sig = inspect.signature(func)
            arg_names = [p for p in sig.parameters.keys() if p != 'self']

        args_str = ", ".join(arg_names)

        manual = f"""\
[Tool Name] {tool_name}
[Usage] {tool_name}({args_str})
[Description] {tool_description}
"""
        return manual

    def _create_subtool_manual(self, subtool_name, tool_obj):
        """Generates a manual entry for a sub-tool intended for use with subtool_do."""
        tool_name = tool_obj.name
        tool_description = getattr(tool_obj, "description", "No description available.")

        arg_names = []
        if hasattr(tool_obj, "args_schema") and tool_obj.args_schema:
            # Check for Pydantic v2 or v1 style access
            if hasattr(tool_obj.args_schema, "model_fields"):
                arg_names = list(tool_obj.args_schema.model_fields.keys())
            else:
                arg_names = list(tool_obj.args_schema.__fields__.keys())
        else:
            func = getattr(tool_obj, "func", tool_obj)
            sig = inspect.signature(func)
            arg_names = [p for p in sig.parameters.keys() if p != 'self']

        args_str = ", ".join(arg_names)
        args_dict_str = ", ".join([f'"{name}": ...' for name in arg_names])

        manual = f"""\
[Sub-tool Name] {subtool_name}
[Original Tool Name] {tool_name}
[Original Usage] {tool_name}({args_str})
[Description] {tool_description}

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("{subtool_name}", {{{args_dict_str}}})
"""
        return manual

    def create_tool_skill(self, name):
        """Generates Markdown content describing the available tools or sub-skills."""
        if name == "/":
            r = dedent("""\
            ---
            name: /
            description: Sub-tool Root. Explains how to explore available sub-tools.
            allowed-tools: No special permission is required to use sub-tools.
            ---

            Sub-tools are organized into directories called "Sub-skills."

            To view the sub-tools within a specific sub-skill, execute the tool `subtool_show("/path")` (e.g., `subtool_show("/sys")`). You will find detailed documentation similar to a SKILL.md file there.

            ## Available Sub-skills

            """)
            for dir_name in self.tools:
                if "description" in self.tools[dir_name]:
                    e = self.tools[dir_name]
                    r += f"-  **{e['name']}**: {e['description']}\n"
            return r

        name = name.rstrip("/")
        if name not in self.tools:
            return None

        e = self.tools[name]

        # If this is a specific tool entry
        if "tool" in e:
            if "content" in e:
                r = dedent(f"""\
                ---
                name: {e['name']}
                description: {e['description']}
                allowed-tools: No special permission is required to use this sub-tool.
                ---
                """)
                r += e['content']
                return r

            if e['name'].startswith("/"):
                manual = self._create_subtool_manual(e['name'], e['tool'])
            else:
                manual = self._create_tool_manual(e['tool'])

            status_suffix = "Available [in the current context].\n" if self.tool_tag in e['tags'] else "Not available [in the current context].\n"
            manual += status_suffix

            r = dedent(f"""\
            ---
            name: {e['name']}
            description: {e['tool'].name}
            allowed-tools: No special permission is required to use this sub-tool.
            ---
            """)
            r += manual
            return r

        # If this is a directory/sub-skill entry
        r = dedent(f"""\
        ---
        name: {e['name']}
        description: {e['description']}
        allowed-tools: No special permission is required to use this sub-skill.
        ---
        """)
        r += e['content']

        dirs = [d_name for d_name, x in self.tools.items()
                if d_name.startswith(e['name'] + "/")
                and 'description' in x]
        subtools = [st_name for st_name, x in self.tools.items()
                    if st_name.startswith(e['name'] + "/")
                    and 'description' not in x]

        if dirs:
            r += "\n## Sub-skills\n\n"
            for d_name in dirs:
                x = self.tools[d_name]
                r += f"-  **{x['name']}**: {x['description']}\n"

        if subtools:
            r += "\n## Sub-tools\n\n"
            for subtool_name in subtools:
                x = self.tools[subtool_name]
                manual = self._create_subtool_manual(x['name'], x['tool'])
                r += dedent(f"""\

                ### Sub-tool: {x['name']}

                """)
                r += manual

        return r

    def _replace_tools (self, from_tools, to_tools):
        tool_names = [x.name for x in to_tools]
        return [x for x in from_tools
                if x.name not in tool_names] + to_tools

    def init_tools (self):
        @tool
        def express_thought(thought: str) -> None:
            """Expresses the player's current thoughts or reasoning."""
            mes = f"Thought expressed: \"{thought}\""
            print(f"Tool(express_thought): {mes}")

        @tool
        def show_plan() -> str:
            """Returns the player's current plan and policy."""
            print(f"Tool(show_plan): {self.plan}")
            return self.plan

        @tool
        def update_plan(new_plan: str) -> str:
            """
            Updates the player's current plan and policy.
            Provide the new plan/policy string to be displayed.
            Structure it so that another agent can easily follow the strategy.
            """
            self.plan = new_plan
            mes = "Plan and policy updated."
            print(f"Tool(update_plan): {mes}: {new_plan}")
            return mes

        @tool
        def show_core() -> str:
            """Returns the current core context."""
            print(f"Tool(show_core): {self.core_context}")
            return self.core_context

        @tool
        def update_core(new_core: str) -> str:
            """
            Updates the core context.
            The core context contains critical information (like required memory_read or subtool_show targets)
            that should be remembered even after context truncation or summarization.
            """
            self.core_context = new_core
            mes = "Core context updated."
            print(f"Tool(update_core): {mes}: {new_core}")
            return mes

        @tool
        def show_scratchpad() -> str:
            """Returns the current content of the scratchpad."""
            print(f"Tool(show_scratchpad): {self.scratchpad}")
            return self.scratchpad

        @tool
        def update_scratchpad(new_scratchpad: str) -> str:
            """Updates the freely usable scratchpad."""
            self.scratchpad = new_scratchpad
            mes = "Scratchpad updated."
            print(f"Tool(update_scratchpad): {mes}: {new_scratchpad}")
            return mes

        @tool
        def memory_new(title: str, text: str) -> str:
            """
            Creates a new memory entry with the specified title and text.
            Returns the assigned memory_id.
            """
            i = 1000
            while True:
                if f"memory:{i}" not in self.memories:
                    break
                i += 1
            new_id = f"memory:{i}"
            self.memories[new_id] = {
                'id': new_id,
                'title': title,
                'accesses': 0,
                'text': text,
                'modified_at': datetime.datetime.now().isoformat()
            }
            self.update_keywords(text)
            self.update_vector(self.memories[new_id])
            print(f"Tool(memory_new): {short_repr(self.memories[new_id])}")
            return new_id

        @tool
        def memory_update_string(memory_id: str, from_str: str, to_str: str) -> str:
            """
            Corrects or replaces a string within a specific memory entry.
            Args:
                memory_id: The ID of the memory to modify.
                from_str: The substring to be replaced.
                to_str: The new substring to insert.
            """
            memory_id = self.normalize_memory_id(memory_id)
            if memory_id not in self.memories:
                return f"Error: Memory ID '{memory_id}' not found."
            if memory_id.startswith("memory:9"):
                return f"Error: Modification of [{memory_id}] is prohibited."

            original_title = self.memories[memory_id]['title']
            original_text = self.memories[memory_id]['text']

            if from_str not in original_text and from_str not in original_title:
                return f"Error: Original string '{from_str}' not found in memory."

            updated_title = original_title.replace(from_str, to_str)
            updated_text = original_text.replace(from_str, to_str)

            self.memories[memory_id]['title'] = updated_title
            self.memories[memory_id]['text'] = updated_text
            self.memories[memory_id]['modified_at'] = datetime.datetime.now().isoformat()
            self.update_keywords(updated_text)
            self.update_vector(self.memories[memory_id])

            return f"Success: Updated memory ID '{memory_id}' by replacing '{from_str}' with '{to_str}'."

        @tool
        def memory_append_string(memory_id: str, string_to_append: str, separator: str = '\n') -> str:
            """Appends a string to the specified memory entry."""
            memory_id = self.normalize_memory_id(memory_id)
            if memory_id not in self.memories:
                return f"Error: Memory ID '{memory_id}' not found."
            if memory_id.startswith("memory:9"):
                return f"Error: Modification of [{memory_id}] is prohibited."

            original_text = self.memories[memory_id]['text']
            updated_text = original_text + separator + string_to_append
            self.memories[memory_id]['text'] = updated_text
            self.memories[memory_id]['modified_at'] = datetime.datetime.now().isoformat()
            self.update_keywords(updated_text)
            self.update_vector(self.memories[memory_id])

            return f"Success: Appended text to memory ID '{memory_id}'."

        @tool
        def memory_delete(memory_id: str) -> str:
            """Deletes the specified memory entry."""
            memory_id = self.normalize_memory_id(memory_id)
            if memory_id not in self.memories:
                return f"Error: Memory ID '{memory_id}' not found."
            if memory_id.startswith("memory:9"):
                return f"Error: Deletion of [{memory_id}] is prohibited."

            del self.memories[memory_id]
            return f"Success: Deleted memory ID '{memory_id}'."

        @tool
        def memory_read(memory_id: str) -> Union[Dict[str, Any], str]:
            """
            Reads the memory content associated with the specified ID.

            Args:
                memory_id (str): The ID of the memory to read (e.g., 'memory:1001').

            Returns:
                Union[Dict[str, Any], str]: A dictionary containing memory details
                    if successful, or an error message string if the ID is not found.
            """
            memory_id = self.normalize_memory_id(memory_id)
            if memory_id in self.memories:
                self.memories[memory_id]['accesses'] += self.access_unit * 1.0
                self.recent_reads.append(self.memories[memory_id])
                self.recent_reads = self.recent_reads[-10:]
                r = self.memories[memory_id].copy()
                if 'vector' in r: del r['vector']
                return r
            else:
                return f"Error: Memory ID '{memory_id}' not found."

        @tool
        def memory_list_recent(top_n: int = 10) -> Dict[str, Any]:
            """Lists recently modified memories, sorted by time descending."""
            filter_date = datetime.datetime(2025, 1, 1)
            sorted_memories = sorted(
                [m for m in self.memories.values()
                 if datetime.datetime.fromisoformat(m['modified_at']) >= filter_date],
                key=lambda x: datetime.datetime.fromisoformat(x['modified_at']),
                reverse=True
            )
            if sorted_memories:
                result = [{'id': x['id'], 'title': x['title'], 'modified_at': x['modified_at']}
                          for x in sorted_memories[:top_n]]
                return {'status': 'success', 'result': result}
            else:
                return {'status': 'error', 'result': 'Error: No recent memories found.'}

        @tool
        def memory_list_random(top_n: int = 10) -> Dict[str, Any]:
            """Lists memories in random order."""
            keys = list(self.memories.keys())
            if len(keys) > top_n:
                keys = random.sample(keys, top_n)
            if keys:
                result = [{'id': self.memories[k]['id'], 'title': self.memories[k]['title'], 'modified_at': self.memories[k]['modified_at']}
                          for k in keys]
                return {'status': 'success', 'result': result}
            else:
                return {'status': 'error', 'result': 'Error: No memories found.'}

        @tool
        def memory_words_search(search_str: str) -> Dict[str, Any]:
            """Searches memories using string matching (supports OR and grouping)."""
            res = self.call_backend_agent(dedent(f"""\
            Simulate a full-text search across all memories with search_str = {repr(search_str)}.
            Support OR and parentheses logic.
            Use actual memory data obtained from 'read_all_memories' or 'read_all_keywords'.
            Return results using the 'set_result' tool.

            Status: 'error' or 'success'
            Result: List of Match data (m) dictionaries:
              m['id']: Memory ID (memory:...)
              m['title']: Memory Title
              m['snippet']: Contextual snippet of text surrounding the match.
            """))
            if res['status'] == 'success':
                for m in res['result']:
                    if 'id' in m and m['id'] in self.memories:
                        self.memories[m['id']]['accesses'] += self.access_unit * 0.1
            return res


        @tool
        def memory_semantic_search(search_str: str) -> Dict[str, Any]:
            """Performs a semantic search within the memory based on the search string."""
            res = self.call_backend_agent(dedent(f"""\
            Simulate a semantic search across all memories for search_str = {repr(search_str)}.
            Use actual memory data from available tools.
            Return results using the 'set_result' tool.

            Status: 'error' or 'success'
            Result: List of Match data (m) dictionaries:
              m['id']: Memory ID (memory:...)
              m['title']: Memory Title
              m['snippet']: Snippet showing why this memory is semantically relevant.
            """))
            if res['status'] == 'success':
                for m in res['result']:
                    if 'id' in m and m['id'] in self.memories:
                        self.memories[m['id']]['accesses'] += self.access_unit * 0.1
            return res


        @tool
        def imagine_keywords(thought: str) -> List[Tuple[str, float]]:
            """Associates thoughts with multiple keywords and relevant scores."""
            r = self.call_backend_agent(dedent(f"""\
            Generate multiple associated keywords with scores based on thought = {repr(thought)}.
            Use actual keywords existing in the system.
            Return results using 'set_result'.

            Status: 'error' or 'success'
            Result: List of keyword tuples (string, score).
            """))
            return r["result"] if r['status'] == 'success' else []

        @tool
        def bandit_schedule(tool_name: str, times: int, prob: float, exec_mode: str = "persistent", aux_prompt: str = "", workflow_id: str = "workflow:current") -> str:
            """
            Schedules a 'bandit' to enforce the use of specific tools.
            Args:
                tool_name: Name(s) of the tool(s) to enforce. Can use " OR " for multiple tools.
                times: Number of times to add this entry. Set to 0 to remove.
                prob: Probability of execution per turn.
                exec_mode: "once" or "persistent".
                aux_prompt: Additional instructions for execution.
                workflow_id: The target workflow.
            """
            tool_names = re.split(r"\s+or\s+|\s+OR\s+", tool_name)
            prohibited = set(self.privileged_tool_names) & set(tool_names)
            if prohibited:
                return f"Failure. {repr(prohibited)} cannot be registered."
            all_tools = [name for name, x in self.tools.items()
                         if "tool" in x]
            if not any(x in all_tools for x in tool_names):
                return f"Failure. {tool_name} is not a valid tool."

            workflow_id = self.normalize_workflow_id(workflow_id)
            if workflow_id not in self.workflows:
                return f"Failure. {workflow_id} is not a valid workflow."
            if 'w' in self.workflows[workflow_id]['pin']:
                return f"Failure. {workflow_id} is read-only."

            dest = None
            for i, x in enumerate(self.workflows[workflow_id]['stack']):
                if x['tool_name'] == tool_name \
                   and x['exec_mode'] == exec_mode \
                   and x['aux_prompt'] == aux_prompt \
                   and x['arg'] is None:
                    dest = i
                    break
            if dest is not None:
                x = self.workflows[workflow_id]['stack'][dest]
                if not x['pin']:
                    self.workflows[workflow_id]['stack'].pop(dest)
                    if times == 0 or prob == 0.0:
                        return "Success. Bandit removed."
                    self.workflows[workflow_id]['stack'].append(x)
            else:
                if times == 0 or prob == 0.0:
                    return "Failure. No such bandit found. To specify a bandit, you must match all of the following: tool_name, exec_mode, and aux_prompt."
                x = {
                    'pin': 'stack' if exec_mode != "once" else None,
                    'arg': None
                }
                self.workflows[workflow_id]['stack'].append(x)
            if x['pin'] == "write":
                return f"Failure. '{tool_name}' is protected."
            else:
                x['tool_name'] = tool_name
                x['tools_name'] = 'default_tools'
                x['exec_mode'] = exec_mode
                x['aux_prompt'] = aux_prompt
                x['prob'] = prob
                x['times'] = times
                print(f"Tool(bandit_schedule): {repr(x)}")
                if dest is None:
                    return "Success. Bandit registered."
                else:
                    return "Success. Bandit updated."
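
        # Usage sketch (hypothetical arguments, for illustration only):
        #
        #   bandit_schedule(
        #       tool_name="memory_semantic_search OR memory_words_search",
        #       times=3, prob=0.5, exec_mode="persistent",
        #       aux_prompt="Look for past notes related to the current task.")
        #
        # This stacks a bandit that is attempted 3 times; each attempt fires
        # with probability 0.5 and forces one of the two search tools to run.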

        @tool
        def bandit_schedule_memory_read(memory_id: str, times: int, prob: float, exec_mode: str = "persistent", workflow_id: str = "workflow:current") -> str:
            """
            Specialized bandit for enforcing memory_read on a specific memory_id.

            Args:
                memory_id: Memory ID to memory_read.
                times: Number of times to add this entry. Set to 0 to remove.
                prob: Probability of execution per turn.
                exec_mode: "once" or "persistent".
                workflow_id: The target workflow.
            """

            workflow_id = self.normalize_workflow_id(workflow_id)
            if workflow_id not in self.workflows:
                return f"Failure. {workflow_id} is not a valid workflow."
            if 'w' in self.workflows[workflow_id]['pin']:
                return f"Failure. {workflow_id} is read-only."

            memory_id = self.normalize_memory_id(memory_id)

            dest = None
            for i, x in enumerate(self.workflows[workflow_id]['stack']):
                if x['tool_name'] == "memory_read" \
                   and x['exec_mode'] == exec_mode \
                   and not x['aux_prompt'] \
                   and x['arg'] == memory_id:
                    dest = i
                    break
            if dest is not None:
                x = self.workflows[workflow_id]['stack'][dest]
                if not x['pin']:
                    self.workflows[workflow_id]['stack'].pop(dest)
                    if times == 0 or prob == 0.0:
                        return "Success. Bandit removed."
                    self.workflows[workflow_id]['stack'].append(x)
            else:
                if times == 0 or prob == 0.0:
                    return "Failure. No such bandit found. To specify a bandit, you must match all of the following: exec_mode and memory_id."
                x = {'pin': None, 'arg': memory_id}
                self.workflows[workflow_id]['stack'].append(x)
            if x['pin'] == "write":
                return f"Failure. 'memory_read {memory_id}' is protected."
            else:
                x['tool_name'] = 'memory_read'
                x['tools_name'] = 'read_tools'
                x['exec_mode'] = exec_mode
                x['aux_prompt'] = ""
                x['prob'] = prob
                x['times'] = times
                print(f"Tool(bandit_schedule_memory_read): {repr(x)}")
                if dest is None:
                    return "Success. Bandit registered."
                else:
                    return "Success. Bandit updated."

        @tool
        def bandit_schedule_subtool_show(subtool_name: str, times: int, prob: float, exec_mode: str = "persistent", workflow_id: str = "workflow:current") -> str:
            """
            Schedules a bandit to enforce subtool_show for a specific tool path.

            Args:
                subtool_name: Sub-tool Name to subtool_show.
                times: Number of times to add this entry. Set to 0 to remove.
                prob: Probability of execution per turn.
                exec_mode: "once" or "persistent".
                workflow_id: The target workflow.
            """

            workflow_id = self.normalize_workflow_id(workflow_id)
            if workflow_id not in self.workflows:
                return f"Failure. {workflow_id} is not a valid workflow."
            if 'w' in self.workflows[workflow_id]['pin']:
                return f"Failure. {workflow_id} is read-only."

            if subtool_name not in self.tools:
                return f"Failure. {subtool_name} is not a valid name."

            dest = None
            for i, x in enumerate(self.workflows[workflow_id]['stack']):
                if x['tool_name'] == "subtool_show" \
                   and x['exec_mode'] == exec_mode \
                   and not x['aux_prompt'] \
                   and x['arg'] == subtool_name:
                    dest = i
                    break
            if dest is not None:
                x = self.workflows[workflow_id]['stack'][dest]
                if not x['pin']:
                    self.workflows[workflow_id]['stack'].pop(dest)
                    if times == 0 or prob == 0.0:
                        return "Success. Bandit removed."
                    self.workflows[workflow_id]['stack'].append(x)
            else:
                if times == 0 or prob == 0.0:
                    return "Failure. No such bandit found. To specify a bandit, you must match all of the following: exec_mode and subtool_name."
                x = {'pin': None, 'arg': subtool_name}
                self.workflows[workflow_id]['stack'].append(x)
            if x['pin'] == "write":
                return f"Failure. 'subtool_show {subtool_name}' is protected."
            else:
                x['tool_name'] = 'subtool_show'
                x['tools_name'] = 'read_tools'
                x['exec_mode'] = exec_mode
                x['aux_prompt'] = ""
                x['prob'] = prob
                x['times'] = times
                print(f"Tool(bandit_schedule_subtool_show): {repr(x)}")
                if dest is None:
                    return "Success. Bandit registered."
                else:
                    return "Success. Bandit updated."

        @tool
        def bandit_schedule_workflow(workflow_id_to_schedule: str, times: int, prob: float, exec_mode: str = "persistent", workflow_id: str = "workflow:current") -> str:
            """
            Schedules a bandit to enforce the execution of another workflow.

            Args:
                workflow_id_to_schedule: Workflow ID to workflow_do.
                times: Number of times to add this entry. Set to 0 to remove.
                prob: Probability of execution per turn.
                exec_mode: "once" or "persistent".
                workflow_id: The target workflow to register.
            """

            workflow_id = self.normalize_workflow_id(workflow_id)
            if workflow_id not in self.workflows:
                return f"Failure. {workflow_id} is not a valid workflow."
            if 'w' in self.workflows[workflow_id]['pin']:
                return f"Failure. {workflow_id} is read-only."

            workflow_id2 = self.normalize_workflow_id(workflow_id_to_schedule)
            if workflow_id2 not in self.workflows:
                return f"Failure. {workflow_id2} is not a valid workflow."

            dest = None
            for i, x in enumerate(self.workflows[workflow_id]['stack']):
                if x['tool_name'] == "workflow_do" \
                   and x['exec_mode'] == exec_mode \
                   and not x['aux_prompt'] \
                   and x['arg'] == workflow_id2:
                    dest = i
                    break
            if dest is not None:
                x = self.workflows[workflow_id]['stack'][dest]
                if not x['pin']:
                    self.workflows[workflow_id]['stack'].pop(dest)
                    if times == 0 or prob == 0.0:
                        return "Success. Bandit removed."
                    self.workflows[workflow_id]['stack'].append(x)
            else:
                if times == 0 or prob == 0.0:
                    return "Failure. No such bandit found. To specify a bandit, you must match all of the following: exec_mode and workflow_id_to_schedule."
                x = {
                    'pin': 'stack' if exec_mode != "once" else None,
                    'arg': workflow_id2
                }
                self.workflows[workflow_id]['stack'].append(x)
            if x['pin'] == "write":
                return f"Failure. 'workflow_do {workflow_id2}' is protected."
            else:
                x['tool_name'] = 'workflow_do'
                x['tools_name'] = 'default_tools'
                x['exec_mode'] = exec_mode
                x['aux_prompt'] = ""
                x['prob'] = prob
                x['times'] = times
                print(f"Tool(bandit_schedule_workflow): {repr(x)}")
                if dest is None:
                    return "Success. Bandit registered."
                else:
                    return "Success. Bandit updated."

        @tool
        def bandit_list(workflow_id: str = "workflow:current") -> Union[Dict[str, Any], str]:
            """Returns the current stack of registered bandits for a workflow."""

            workflow_id = self.normalize_workflow_id(workflow_id)
            if workflow_id not in self.workflows:
                return f"Failure. {workflow_id} is not valid."
            return {'status': 'success',
                    'result': self.workflows[workflow_id]['stack']}

        @tool
        def bandit_statistics() -> str:
            """Returns statistical data useful for tuning bandit probabilities."""

            s_read = calc_embedding_variance([
                x['vector'] for x in self.recent_reads
            ])
            s_write = calc_embedding_variance([
                x['vector'] for x in self.memories.values()
            ])
            accesses = [x['accesses'] for x in self.memories.values()]
            accesses.sort()
            accesses = accesses[:len(accesses) // 2]
            if accesses:
                s_access = np.mean(accesses)
            else:
                s_access = 0.0

            return dedent(f"""\
            Variance of last 10 memory reads: {s_read}
            Total memory variance: {s_write}
            Average access count of bottom 50% memories: {s_access}
            """)
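
        # Reading the statistics (a heuristic suggestion, not part of the
        # original spec): low read variance combined with high total variance
        # suggests reads are clustered in one semantic area, which may call
        # for raising the prob of memory_list_random or scheduling
        # bandit_schedule_memory_read on rarely accessed memories.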

        @tool
        def subwork_done() -> str:
            """Declares that the assigned sub-task has been completed."""
            return "Success. Sub-task completion declared."

        @tool
        def workflow_do(workflow_id: str) -> str:
            """Executes a specific workflow."""
            if self.workflow_next:
                return f"Failure. {self.workflow_next} is already scheduled."
            workflow_id = self.normalize_workflow_id(workflow_id)
            if workflow_id not in self.workflows:
                return f"Failure. {workflow_id} is not valid."
            if 'e' in self.workflows[workflow_id]['pin']:
                return f"Failure. {workflow_id} cannot be run as a child."
            self.workflow_next = workflow_id
            title = self.workflows[workflow_id]['title']
            return f"Success. {workflow_id} ('{title}') will execute next."

        @tool
        def workflow_list() -> Dict[str, Any]:
            """Lists all registered workflows including IDs, titles, and pin status."""
            return {'status': 'success', 'result': list(self.workflows.values())}

        @tool
        def workflow_show_current() -> str:
            """Displays information about the currently active workflow and active bandit enforcement."""
            w = self.workflows[self.workflow_current]
            mes = dedent(f"""\
            Current Workflow: {self.workflow_current} ('{w['title']}')
            Current Bandit Prompt: \"{self.cur_bandit_prompt}\"
            Current Bandit Config: {repr(self.cur_bandit)}
            Bandit Execution Count: {self.cur_bandit_done}
            """)
            print(f"Tool(workflow_show_current): {mes}")
            return mes

        @tool
        def workflow_new(title: str, bandits: List[Dict[str, Any]], pin: str)  -> str:
            """
            Defines a new workflow and returns its workflow_id.

            Args:
                title: The name of the workflow.
                bandits: A list of bandits to register.
                pin: Security flags: 'w' for read-only (unwritable), 'd' for undeletable, 'wd' for both, '' for writable and deletable.

            Each bandit is represented as a dictionary `b`:
            - `b['tool_name']`: Same as tool_name in bandit_schedule.
            - `b['exec_mode']`: Same as exec_mode in bandit_schedule.
            - `b['aux_prompt']`: Same as aux_prompt in bandit_schedule.
            - `b['prob']`: Same as prob in bandit_schedule.
            - `b['times']`: Same as times in bandit_schedule.
            - `b['arg']`: If `b['tool_name']` is 'memory_read', specify a memory_id (memory:...). If 'workflow_do', specify a workflow_id (workflow:...).
            - `b['pin']`: If None, deletable or prob/times can be updated. If 'stack', prob/times can be updated. If 'write', cannot be updated at all.
            """
            # Hidden attribute for AI: pin 'e' makes it non-executable as a child.
            for b in bandits:
                if not all(x in b for x in ['tool_name', 'exec_mode',
                                            'aux_prompt', 'times', 'prob']):
                    return "Failure. Invalid bandit definition."
                if 'arg' not in b:
                    b['arg'] = None
                if b['tool_name'] not in ["memory_read", "workflow_do"] \
                   and b['arg']:
                    return "Failure. Invalid bandit definition."
                if b['arg'] and b['tool_name'] == "memory_read":
                    b['tools_name'] = "read_tools"
                else:
                    b['tools_name'] = "default_tools"
                if 'pin' not in b:
                    b['pin'] = None
                if not (b['pin'] is None or b['pin'] == 'stack'
                        or b['pin'] == 'write'):
                    return "Failure. Invalid pin value."
                tool_names = re.split(r"\s+or\s+|\s+OR\s+", b['tool_name'])
                prohibited = set(self.privileged_tool_names) & set(tool_names)
                if prohibited:
                    return f"Failure. {repr(prohibited)} cannot be registered."
                all_tools = [name for name, x in self.tools.items()
                                 if "tool" in x and b['tools_name'] in x.tags]
                if not any(x in all_tools for x in tool_names):
                    return f"Failure. {b['tool_name']} is not a valid tool specification."

            i = 1000
            while f"workflow:{i}" in self.workflows:
                i += 1
            new_id = f"workflow:{i}"

            self.workflows[new_id] = {'stack': bandits, 'pin': pin,
                                      'title': title, 'id': new_id}
            print(f"Tool(workflow_new): {repr(self.workflows[new_id])}")
            return f"Success. Registered new workflow {new_id}."
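
        # Usage sketch (hypothetical arguments): a read-only workflow whose
        # single bandit forces a semantic search on each of 5 attempts:
        #
        #   workflow_new(
        #       title="Search loop",
        #       bandits=[{'tool_name': "memory_semantic_search",
        #                 'exec_mode': "persistent", 'aux_prompt': "",
        #                 'times': 5, 'prob': 1.0}],
        #       pin="w")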

        @tool
        def workflow_delete(workflow_id: str) -> str:
            """Deletes a workflow."""
            workflow_id = self.normalize_workflow_id(workflow_id)
            if workflow_id not in self.workflows:
                return "Failure. Workflow not found."
            if 'd' in self.workflows[workflow_id]['pin']:
                return "Failure. Workflow is protected from deletion."
            del self.workflows[workflow_id]
            return f"Success. Deleted {workflow_id}."

        @tool
        def subtool_show(subtool_name: str) -> str:
            """Returns documentation/skill details for a sub-tool or directory path."""
            r = self.create_tool_skill(subtool_name)
            if r:
                return r
            else:
                return f"Error: {subtool_name} not found or documentation unavailable."

        @tool
        def subtool_do(subtool_name: str, args_dict: Dict[str, Any]) -> Any:
            """
            Executes the specified sub-tool.

            For example, if an original tool named 't1' is registered as '/sys/tool1'
            and is defined as 'def t1(arg1, arg2)', you can call 't1("a", "b")'
            by using 'subtool_do("/sys/tool1", {"arg1": "a", "arg2": "b"})'.

            Args:
                subtool_name (str): The name of the sub-tool starting with '/'.
                args_dict (dict): A dictionary representing the arguments.
            """
            if subtool_name not in self.tools:
                return f"Error: Sub-tool '{subtool_name}' not found."
            if 'tool' not in self.tools[subtool_name]:
                return f"Error: '{subtool_name}' is not an executable tool. Perhaps you need to call 'subtool_show(\"{subtool_name}\")' first."
            if self.tool_tag not in self.tools[subtool_name]['tags']:
                return f"Error: '{subtool_name}' is not currently available for execution. Available tools vary depending on the context."

            target_tool = self.tools[subtool_name]['tool']

            try:
                # Validate arguments if a schema is available
                if hasattr(target_tool, "args_schema") and target_tool.args_schema:
                    target_tool.args_schema.model_validate(args_dict)

                # Execute the tool using invoke or run
                if hasattr(target_tool, "invoke"):
                    result = target_tool.invoke(args_dict)
                else:
                    result = target_tool.run(args_dict)

                return result
            except ValidationError as e:
                error_details = e.errors()
                return f"Error: Invalid argument format.\nDetails: {error_details}"
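
        # Usage sketch ('/sys/bandit_list' is registered as a sub-tool below):
        #
        #   subtool_do("/sys/bandit_list", {"workflow_id": "workflow:current"})
        #
        # This is equivalent to calling bandit_list directly, except that the
        # call passes through schema validation and tag-based availability
        # checks first.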

        # --- Tool Registration ---
        main_tools = [
            express_thought,
            update_scratchpad, show_scratchpad,
            memory_read, memory_list_recent, memory_list_random,
            memory_semantic_search, memory_words_search,
            imagine_keywords,
            subwork_done,
            workflow_do,
            subtool_show, subtool_do,
        ]
        sys_tools = [
            update_core, show_core,
            update_plan, show_plan,
            bandit_schedule, bandit_schedule_memory_read, bandit_list,
            bandit_statistics,
            workflow_new, workflow_list,
            workflow_show_current, workflow_delete,
            bandit_schedule_workflow,
            bandit_schedule_subtool_show,
        ]
        write_tools = [
            memory_new, memory_update_string, memory_append_string,
            memory_delete,
        ]

        for t in main_tools + write_tools:
            self.register_tool(t, tags=["default_tools", "read_tools",
                                        "all_tools"])
        for t in write_tools:
            self.change_tool_tags(t, tags=["default_tools", "all_tools"])
        sys_subtools = [(f"/sys/{t.name}", t) for t in sys_tools]
        self.register_subtools(
            directory="/sys",
            subtools=sys_subtools,
            description="Essential system sub-tools.",
            content=dedent("""\
            A collection of foundational sub-tools for system management,
            workflow orchestration, and bandit scheduling.
            """),
            tags=["default_tools", "read_tools", "all_tools"]
        )


    def _create_agent(self, tools_name='default_tools'):
        self.tool_tag = tools_name
        tools = []
        for name in self.tools:
            if not name.startswith("/"):
                x = self.tools[name]
                if self.tool_tag in x["tags"]:
                    tools.append(x["tool"])

        summarizer = SummarizationMiddleware(
            model=self.llm,
            trigger=("tokens", 5000),
            keep=("messages", 20),
            summary_prompt=SUMMARY_PROMPT,
        )

        app = create_agent(
            model=self.llm, tools=tools, system_prompt=self.system_prompt,
            middleware=[summarizer],
            checkpointer=InMemorySaver(), name="main-agent",
        )

        return app

    def _filterout_messages2(self):
        self.messages = [
            x for x in self.messages
            if x.id not in self.messages2ids
        ]

    def _sanitize_messages(self):
        """Workaround to sanitize message history and prevent unusual errors."""
        print("Sanitizing messages as a workaround for unexpected errors.")
        self.messages = [
            m for m in self.messages
            if not (isinstance(m, AIMessage) and m.tool_calls)
        ]

    def run(self, workflow_main_id):
        print("\n\n----------\n\n")
        self.messages2ids = []

        self.workflow_current = workflow_main_id
        # Use deepcopy to avoid modifying the original workflow definition stack
        bandits = copy.deepcopy(
            self.workflows[self.workflow_current]['stack']
        )
        arg1s = {}
        working_bandit = None
        workflow_stack = []
        execed = []
        while True:
            while working_bandit is not None or bandits:
                if working_bandit is not None:
                    b, done, prev_done = working_bandit
                    working_bandit = None
                else:
                    b = bandits.pop()
                    done = 0
                    prev_done = True
                enforce = b['tool_name']
                aux_prompt = b['aux_prompt']
                tools_name = b['tools_name']
                memory_id = None
                workflow_id = None
                subtool_show_name = None
                if b['arg'] and enforce == 'memory_read':
                    memory_id = b['arg']
                if b['arg'] and enforce == 'workflow_do':
                    workflow_id = b['arg']
                if b['arg'] and enforce == 'subtool_show':
                    subtool_show_name = b['arg']

                while done < b['times']:
                    # Probability check for bandit execution
                    if random.random() >= b['prob']:
                        done += 1
                        continue

                    # Validation checks
                    if memory_id and memory_id not in self.memories:
                        done += 1
                        continue
                    if workflow_id and workflow_id not in self.workflows:
                        done += 1
                        continue
                    all_tools = [name for name, x in self.tools.items()
                                 if "tool" in x]
                    tool_names = re.split(r"\s+or\s+|\s+OR\s+", enforce)
                    if not any(x in all_tools for x in tool_names):
                        done += 1
                        continue

                    # Construct instructions
                    if memory_id:
                        aux_prompt = f"Please read {memory_id}."
                    if workflow_id:
                        aux_prompt = f"Please execute {workflow_id}."
                    if subtool_show_name:
                        aux_prompt = f"Please read the skill for {subtool_show_name}."

                    self.cur_bandit = b
                    self.cur_bandit_done = done

                    or_suffix = ' (one of them)' if ' or ' in enforce.lower() else ''
                    aux_suffix = f" (Auxiliary Prompt): {aux_prompt}" if aux_prompt else ""
                    self.cur_bandit_prompt = (
                        f"While using various tools for assistance, eventually use {enforce}{or_suffix} "
                        f"with appropriate parameters.{aux_suffix}"
                    )

                    prompt = self.cur_bandit_prompt
                    if not prev_done:
                        prompt = "The previous instruction has not been completed yet. Previous instruction: " + prompt

                    print(f"USER_INPUT: {prompt}")
                    self.messages.append(HumanMessage(prompt))
                    config = {"configurable": {"thread_id": "1"},
                              "recursion_limit": 25}
                    app = self._create_agent(tools_name=tools_name)
                    self.access_unit = 0.3 if memory_id else 1.0
                    prev_done = False
                    self.workflow_next = None
                    app_stream = None
                    try:
                        for chunk0 in app.stream(
                                {"messages": self.messages.copy()},
                                config=config,
                                stream_mode="updates",
                        ):
                            self.messages = app.get_state(config).values["messages"].copy()
                            if 'model' in chunk0:
                                for chunk in chunk0['model']['messages']:
                                    if hasattr(chunk, "tool_calls") \
                                       and chunk.tool_calls:
                                        for tool_call in chunk.tool_calls:
                                            t_id = tool_call.get('id')
                                            args = tool_call.get('args', {})
                                            if tool_call["name"] == 'subtool_do':
                                                arg1s[t_id] = args.get('subtool_name')
                                            elif tool_call["name"] == 'subtool_show':
                                                arg1s[t_id] = args.get('subtool_name')
                                            elif tool_call["name"] == 'memory_read':
                                                arg1s[t_id] = self.normalize_memory_id(args.get('memory_id'))
                                            elif tool_call["name"] == 'workflow_do':
                                                arg1s[t_id] = self.normalize_workflow_id(args.get('workflow_id'))
                            if 'tools' not in chunk0:
                                continue
                            done2 = 0
                            for chunk in chunk0['tools']['messages']:
                                if chunk.id in self.messages2ids:
                                    print("Warning: duplicate backend message id in main stream; skipping.")
                                    continue
                                if not isinstance(chunk, ToolMessage):
                                    continue
                                last_tool = chunk.name
                                arg1 = None
                                if last_tool == 'subtool_do':
                                    last_tool = arg1s.get(chunk.tool_call_id, "!UNKNOWN!")
                                    if not last_tool.startswith("/"):
                                        last_tool = chunk.name
                                if last_tool in ['memory_read', 'subtool_show', 'workflow_do']:
                                    arg1 = arg1s.get(chunk.tool_call_id, "!UNKNOWN!")
                                print(f"Tool result({last_tool}): {short_repr(chunk.content)}", flush=True)

                                if last_tool == "workflow_do":
                                    if last_tool in re.split(r"\s+or\s+|\s+OR\s+", enforce) \
                                       and (not workflow_id or workflow_id == self.workflow_next):
                                        done += 1
                                        prev_done = True
                                        execed.append(b)
                                        if not self.workflow_next:
                                            done2 = 1
                                            break
                                    if not self.workflow_next:
                                        continue

                                    # Enter sub-workflow
                                    workflow_stack.append((
                                        (b, done, prev_done),
                                        bandits,
                                        execed,
                                        self.workflow_current
                                    ))
                                    self.workflow_current = self.workflow_next
                                    bandits = copy.deepcopy(self.workflows[self.workflow_current]['stack'])
                                    working_bandit = None
                                    execed = []
                                    done2 = 1
                                    break
                                elif last_tool in re.split(r"\s+or\s+|\s+OR\s+", enforce) \
                                   and (not memory_id or memory_id == arg1) \
                                   and (not subtool_show_name or subtool_show_name == arg1):
                                    done += 1
                                    prev_done = True
                                    execed.append(b)
                                    done2 = 1
                                    break
                            if done2:
                                break
                        self._filterout_messages2()
                        #self._summarize_messages()
                        print(f"Agent response: {get_content_text(self.messages[-1].content)}")
                    except GraphRecursionError:
                        print("Recursion limit reached.")
                        self._filterout_messages2()
                        #self._summarize_messages()
                    except Exception as e:
                        print(f"An error occurred (main): {e}")
                        import traceback
                        traceback.print_exc()
                        self._sanitize_messages()
                        raise e

            # Process removal of 'once' execution mode bandits
            for b in execed:
                for x in self.workflows[self.workflow_current]['stack']:
                    if x['tool_name'] == b['tool_name'] \
                       and x['exec_mode'] == b['exec_mode'] \
                       and x['aux_prompt'] == b['aux_prompt'] \
                       and x['arg'] == b['arg'] \
                       and x['exec_mode'] == "once":
                        if x['times'] > 0:
                            x['times'] -= 1
            self.workflows[self.workflow_current]['stack'] = [
                x for x in self.workflows[self.workflow_current]['stack']
                if x['exec_mode'] != 'once' or x['pin'] or x['times'] > 0
            ]

            if not workflow_stack:
                break
            workflow_prev = self.workflow_current
            prev_title = self.workflows[workflow_prev]['title']
            working_bandit, bandits, execed, self.workflow_current \
                = workflow_stack.pop()
            cur_title = self.workflows[self.workflow_current]['title']
            mes = f"Returned from {workflow_prev} ('{prev_title}') to {self.workflow_current} ('{cur_title}')."
            print(f"USER_INPUT: {mes}")
            self.messages.append(HumanMessage(mes))

    def listen_and_print (self, prompt):
        """Listens for user input via a prompt and prints the agent's response."""
        ans = None
        try:
            app = self._create_agent(tools_name='null_tools')
            config = {"configurable": {"thread_id": "1"}}
            print(f"USER_INPUT: {prompt}")
            response = app.invoke(
                {"messages": self.messages + [HumanMessage(prompt)]},
                config=config
            )
            self.messages = response['messages']
            #self._summarize_messages()
            ans = get_content_text(response['messages'][-1].content)
            print(f"Agent response: {ans}")
        except Exception as e:
            print(f"An error occurred (listen_and_print): {e}")
            raise e
        print("")
        sleep(3)
        return ans

    def init_memories(self):
        """Initializes system memories with core instructions."""
        memories = [
            {
                'id': 'memory:9998',
                'title': 'Procedure for searching memory documents',
                'accesses': 0,
                'modified_at': '2023-01-01T00:00:00',
                'text': dedent("""\
                First, use 'express_thought' to consider what kind of information you want to find.

                Then, associate related keywords using 'imagine_keywords'.

                Following those results, try 'memory_words_search' or 'memory_semantic_search'.
                """)
            },
            {
                'id': 'memory:9997',
                'title': 'Procedure for storing documents in memory',
                'accesses': 0,
                'modified_at': '2023-01-01T00:00:00',
                'text': dedent("""\
                Actively record action results and acquired knowledge in memory.

                When writing to memory, use the following elements:

                [memory:...] : Explicitly reference a memory ID.
                keyword:... : Specify keywords related to that memory.

                Note that keywords can effectively serve as links to future memories.

                Example:

                While walking according to [memory:5555], I indeed encountered a yokai.

                keyword: yokai

                It was terrifying.
                """)
            },
            {
                'id': 'memory:9995',
                'title': 'When tools won\'t execute',
                'accesses': 0,
                'modified_at': '2023-01-01T00:00:00',
                'text': dedent("""\
                Tools unrelated to the instructions may sometimes fail to execute.
                Always double-check the tools currently available in the context.
                """)
            },
            {
                'id': 'memory:9994',
                'title': 'Keyword augmentation',
                'accesses': 0,
                'modified_at': '2023-01-01T00:00:00',
                'text': dedent("""\
                Use 'memory_list_random' to list 5 entries, read each one, and if you can assign appropriate keywords, append a 'keyword: ...' sentence to them using 'memory_append_string'.
                """)
            }
        ]
        for x in memories:
            self.update_keywords(x['text'])
            self.memories[x['id']] = x
            self.update_vector(x)

    def init_workflows(self):
        """Initializes default workflows and bandit stacks."""
        workflow_main = [
            {
                'tool_name': 'memory_new',
                'tools_name': 'default_tools',
                'exec_mode': 'persistent',
                'aux_prompt': "Please summarize and write down the recent interactions.",
                'arg': None,
                'prob': 0.1,
                'times': 1,
                'pin': 'stack'
            },
            {
                'tool_name': 'memory_new OR memory_update_string OR memory_append_string',
                'tools_name': 'default_tools',
                'exec_mode': 'persistent',
                'aux_prompt': "",
                'arg': None,
                'prob': 0.4,
                'times': 1,
                'pin': 'stack'
            },
            {
                'tool_name': 'workflow_do',
                'tools_name': 'default_tools',
                'exec_mode': 'persistent',
                'aux_prompt': "",
                'arg': "workflow:1000",
                'prob': 1.0/20,
                'times': 1,
                'pin': 'stack'
            },
            {
                'tool_name': 'memory_read',
                'tools_name': 'default_tools',
                'exec_mode': 'persistent',
                'aux_prompt': "",
                'arg': None,
                'prob': 0.5,
                'times': 3,
                'pin': 'stack'
            },
            {
                'tool_name': 'memory_read',
                'tools_name': 'read_tools',
                'exec_mode': 'persistent',
                'aux_prompt': "",
                'arg': 'memory:9998',
                'prob': 0.1,
                'times': 1,
                'pin': None
            },
            {
                'tool_name': 'memory_read',
                'tools_name': 'read_tools',
                'exec_mode': 'persistent',
                'aux_prompt': "",
                'arg': 'memory:9997',
                'prob': 0.1,
                'times': 1,
                'pin': None
            },
        ]
        self.workflows["workflow:main"] = {
            'pin': 'de',
            'stack': workflow_main,
            'title': "Main"
        }

        workflow_sub = [
            {
                'tool_name': 'subwork_done',
                'tools_name': 'default_tools',
                'exec_mode': 'persistent',
                'aux_prompt': "Read memory:9994, execute its instructions, and once the task is finished, call subwork_done.",
                'arg': None,
                'prob': 1.0,
                'times': 1,
                'pin': 'write'
            },
            {
                'tool_name': 'memory_read',
                'tools_name': 'read_tools',
                'exec_mode': 'persistent',
                'aux_prompt': "",
                'arg': 'memory:9994',
                'prob': 1.0,
                'times': 1,
                'pin': 'write'
            }
        ]
        self.workflows["workflow:1000"] = {
            'pin': 'wd',
            'stack': workflow_sub,
            'title': 'Keyword Update'
        }

    def update_keywords (self, text):
        extracted_keywords = []

        pattern1 = r'keyword:\s*(.*?)(?:\n|$)'
        matches1 = re.findall(pattern1, text, re.IGNORECASE)
        extracted_keywords.extend([kw.strip() for kw in matches1])

        pattern2 = r'\[keyword:\s*(.*?)\]'
        matches2 = re.findall(pattern2, text, re.IGNORECASE)
        extracted_keywords.extend([kw.strip() for kw in matches2])

        for keyword in extracted_keywords:
            if keyword.startswith("〜"):
                continue
            if keyword and keyword not in self.keywords:
                self.keywords.append(keyword)

    def update_vector (self, x):
        text = x['title'] + "\n\n" + x['text']
        x['vector'] = self.emb_llm.embed_query(text)

    def _create_backend_agent(self):
        """Creates the backend/sub-agent with specialized memory tools."""
        @tool
        def set_result(status: str, res: Union[Dict, List, str, int, float, bool, None]) -> None:
            """
            Sets the final result for the backend process.

            Args:
                status (str): The status of the process ('success' or 'error').
                res: The result data (can be any serializable type).
            """
            print(f"Tool2(set_result): status: {repr(status)}, result: {short_repr(res)}")
            self.backend_status = status
            self.backend_result = res

        @tool
        def read_all_memories() -> Dict[str, Any]:
            """Reads all stored memories available in the system (excluding vectors)."""
            print("Tool2(read_all_memories): Retrieving all data...")
            return {
                'status': 'success',
                'result': [{k: v for k, v in x.items() if k != 'vector'}
                           for x in self.memories.values()]
            }

        @tool
        def read_all_keywords() -> Dict[str, Any]:
            """Reads all keywords registered in the system."""
            print("Tool2(read_all_keywords): Retrieving all keywords...")
            return {
                'status': 'success',
                'result': [x for x in self.keywords]
            }

        @tool
        def express_thought(thought: str) -> None:
            """Expresses the backend agent's current thoughts or reasoning process."""
            mes = f"Thought expressed by backend: \"{thought}\""
            print(f"Tool2(express_thought): {mes}")

        tools = [set_result, read_all_memories, read_all_keywords, express_thought]

        app = create_agent(
            model=self.llm2,
            tools=tools,
            system_prompt=self.system_prompt2,
            checkpointer=InMemorySaver(),
            name="sub-agent",
        )

        return app

    def call_backend_agent(self, user_input: str) -> Dict[str, Any]:
        """Orchestrates the backend agent loop until a result is set."""
        config = {"configurable": {"thread_id": "2"}}
        app = self._create_backend_agent()
        self.messages2 = []
        self.backend_result = None
        self.backend_status = None

        # set_result always assigns both fields; checking status alone avoids
        # looping forever when the result is legitimately None.
        while self.backend_status is None:
            try:
                sleep(3)
                print(f"USER_INPUT2: {user_input}")
                self.messages2.append(HumanMessage(user_input))
                for chunk0 in app.stream(
                        {"messages": self.messages2.copy()},
                        config=config,
                        stream_mode="updates",
                ):
                    self.messages2 = app.get_state(config).values["messages"].copy()
                    done = 0
                    if "tools" not in chunk0:
                        continue
                    for x in chunk0['tools']['messages']:
                        self.messages2ids.append(x.id)
                        if isinstance(x, ToolMessage):
                            print(f"Tool result 2({x.name}): {short_repr(x.content)}", flush=True)
                        if isinstance(x, ToolMessage) and x.name == "set_result":
                            done = 1
                            break
                    if done:
                        break
                print(f"Sub-Agent response: {get_content_text(self.messages2[-1].content)}")
            except GraphRecursionError:
                print("Recursion limit reached in sub-agent.")
            except Exception as e:
                print(f"An error occurred (sub): {e}")
                import traceback
                traceback.print_exc()
                raise e

            sleep(3)

        return {'status': self.backend_status, 'result': self.backend_result}
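
As an aside, the two regular expressions that update_keywords applies (a bare 'keyword: ...' line and a bracketed '[keyword: ...]' form) can be exercised standalone. The sketch below reuses those exact patterns; the helper name extract_keywords is illustrative only and is not part of the framework:

```python
import re

def extract_keywords(text):
    """Mirror of the patterns used in update_keywords:
    bare 'keyword: ...' lines and bracketed '[keyword: ...]' forms."""
    keywords = []
    # Pattern 1: 'keyword:' followed by everything up to the end of the line
    for kw in re.findall(r'keyword:\s*(.*?)(?:\n|$)', text, re.IGNORECASE):
        keywords.append(kw.strip())
    # Pattern 2: the bracketed '[keyword: ...]' form
    for kw in re.findall(r'\[keyword:\s*(.*?)\]', text, re.IGNORECASE):
        keywords.append(kw.strip())
    # Deduplicate while preserving first-seen order, as update_keywords does
    seen = []
    for kw in keywords:
        if kw and kw not in seen:
            seen.append(kw)
    return seen

print(extract_keywords("I met a yokai.\n\nkeyword: yokai\n\nIt was terrifying."))
# → ['yokai']
```

Note that on a bracketed form such as `[keyword: maze]`, pattern 1 also fires and captures the trailing `]`, which is one reason the example memory texts above use the bare form.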

The main RagAgent class, a subclass of MemoryBanditWorkflow.
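
The state machine that drives it (States 0 through 6, as printed by show_execution_map and walked by resume) can be sketched as a plain transition function, independent of any LLM. The function name next_state is hypothetical, for illustration only:

```python
def next_state(state, done):
    """Transition function matching the execution map in show_execution_map.
    `done` reflects whether the sub-task reported completion (subwork_done)."""
    transitions = {
        0: (1, 1),  # plan strategy and outline -> research
        1: (2, 2),  # research / collect data -> verify collected data
        2: (3, 1),  # sufficient -> write thesis; insufficient -> research again
        3: (4, 4),  # write chapters -> check completeness
        4: (6, 5),  # complete -> finished; incomplete -> additional research
        5: (3, 3),  # additional research -> back to writing
    }
    if state == 6:
        return 6    # terminal state: thesis finished
    on_done, on_not_done = transitions[state]
    return on_done if done else on_not_done

# Walk one possible run: plan, research, fail the check once, research
# again, pass the check, write, and finish.
state = 0
for done in [True, True, False, True, True, True, True]:
    state = next_state(state, done)
print(state)
# → 6
```

This mirrors how resume() loops between States 1–2 and 3–5 before reaching the terminal State 6.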

In [ ]:
class RagAgent(MemoryBanditWorkflow):
    """
    RagAgent: A specialized RAG agent designed to research and write a structured thesis.
    """
    def __init__(self, llm=None, llm2=None, emb_llm=None, save_file=None):
        # Initialize thesis structure
        self.thesis = {
            'title': "",
            'chapters': [{'title': 'Overview', 'text': ""}]
        }

        # Initialize base class
        super().__init__(llm=llm, llm2=llm2, emb_llm=emb_llm, save_file=save_file)

        # Primary System Prompt
        self.system_prompt = dedent("""\
        You are a clever RAG agent. You will be writing a full thesis (/thesis).
        Consider the current plan, policy, and surrounding circumstances, and update the plan and policy as necessary.
        Please leave a plan and policy that makes it easy for another agent to take the next action.

        Memory IDs follow the format 'memory:...'. Specifying only the number (e.g., '5555') instead of 'memory:5555' is insufficient. When referencing memory in text, write it as [memory:...].
        'Procedure for searching memory' is in [memory:9998]. 'Procedure for storing documents' is in [memory:9997].

        Many tools are stored as sub-tools. Sub-tool names start with '/', such as '/dir1/subtool1'. To see available sub-tools, first run subtool_show("/").

        The thesis (/thesis) is composed of chapters, each in Markdown format.
        The thesis title should be formatted with a single #, and chapter titles with ##. Write according to this hierarchy.
        Do NOT include internal memory references (like [memory:...]) in the final thesis text.
        When storing data in memory, keep track of reference URLs separately.
        Complete the thesis using sub-tools under the /thesis directory.
        """)

        # Secondary (Backend) System Prompt
        self.system_prompt2 = dedent("""\
        You are a backend agent supporting the clever RAG agent.
        While this backend is intended to be implemented using various techniques, it is currently in a testing phase, so you must simulate its behavior.

        Think carefully, use tools proactively, and follow the instructions from the Human.
        """)

        # Status and workflow variables
        self.subwork_done = True
        self.user_demand = ''
        self.current_work = ''
        self.current_state = 0

    def init_tools(self):
        """Initializes RAG-specific tools including thesis management and execution control."""
        super().init_tools()

        @tool
        def show_user_demand() -> str:
            """Returns the main objective/demand from the user."""
            print(f"Tool(show_user_demand): {self.user_demand}")
            return self.user_demand

        @tool
        def show_current_work() -> str:
            """Returns the current task or sub-objective."""
            print(f"Tool(show_current_work): {self.current_work}")
            return self.current_work

        @tool
        def thesis_write_title(new_title: str) -> str:
            """Updates the title of the thesis."""
            self.thesis['title'] = new_title
            mes = "Thesis title has been updated."
            print(f"Tool(thesis_write_title): {mes}: {new_title}")
            return mes

        @tool
        def thesis_show_title() -> str:
            """Returns the current thesis title."""
            print(f"Tool(thesis_show_title): {self.thesis['title']}")
            return self.thesis['title']

        @tool
        def thesis_new_chapter(title: str, text: str) -> str:
            """Creates a new chapter with the given title and text content."""
            x = {'title': title, 'text': text}
            num = len(self.thesis['chapters'])
            self.thesis['chapters'].append(x)
            print(f"Tool(thesis_new_chapter): {short_repr(x)}")
            return f"Success: Created Chapter {num}."

        @tool
        def thesis_write_chapter(chapter_num: int, new_title: str, new_text: str) -> str:
            """Replaces the content of an existing chapter with new_title and new_text."""
            if not (0 <= chapter_num < len(self.thesis['chapters'])):
                return f"Failure: Chapter {chapter_num} does not currently exist."
            x = {'title': new_title, 'text': new_text}
            self.thesis['chapters'][chapter_num] = x
            print(f"Tool(thesis_write_chapter): {short_repr(x)}")
            return f"Success: Rewrote Chapter {chapter_num}."

        @tool
        def thesis_delete_last_chapter() -> str:
            """Deletes the very last chapter of the thesis."""
            num = len(self.thesis['chapters']) - 1
            if num <= 0:
                return "Failure: Cannot delete further (must keep Overview)."
            self.thesis['chapters'].pop()
            return f"Success: Deleted Chapter {num}."

        @tool
        def thesis_read_chapter(chapter_num: int) -> Union[Dict[str, str], str]:
            """Reads the content of the specified chapter number."""
            if not (0 <= chapter_num < len(self.thesis['chapters'])):
                return f"Failure: Chapter {chapter_num} does not currently exist."
            return self.thesis['chapters'][chapter_num]

        @tool
        def thesis_list_chapters() -> List[str]:
            """Returns a list of all current chapter titles."""
            return [x['title'] for x in self.thesis['chapters']]

        @tool
        def subwork_done() -> str:
            """Declares that the assigned sub-task has been successfully completed."""
            self.subwork_done = True
            return "Success. Sub-task completion declared."

        @tool
        def subwork_not_done() -> str:
            """Declares that the assigned sub-task is NOT yet completed."""
            self.subwork_done = False
            return "Success. Sub-task declared incomplete."

        @tool
        def show_execution_map() -> str:
            """Returns the overall execution flow and the current state (0-6)."""
            mes = dedent(f"""\
            Main objective: "{self.user_demand}"

            Execution Map overview (Begins at State 0):
            Transitions occur when 'subwork_done' or 'subwork_not_done' is called as instructed.

            State 0: Plan strategy and outline in memory. -> Completion moves to State 1.
            State 1: Research via web and collect data chapter by chapter. Record findings in memory. -> Completion moves to State 2.
            State 2: Verify if collected data is sufficient for all chapters. -> Completion moves to State 3. Incompletion returns to State 1.
            State 3: Write thesis (/thesis) chapters based on memory data. -> Completion moves to State 4.
            State 4: Check if all chapters are complete. Finalize title and rewrite Overview. -> Completion moves to State 6. Incompletion moves to State 5.
            State 5: Perform additional research if necessary. -> Completion moves back to State 3.
            State 6: Finished! Complete.

            Current State: {self.current_state}
            """)
            print(f"Tool(show_execution_map): {mes}")
            return mes

        @tool
        def get_web_page_content(url: str) -> str:
            """
            Fetches and returns the text content of a specified URL.
            To be used after finding relevant URLs during investigation.
            """
            try:
                loader = WebBaseLoader(url)
                docs = loader.load()
                content = " ".join([getattr(doc, 'page_content', '') for doc in docs])
                print(f"Tool(get_web_page_content): Success: {url}")
                if len(content) > 5000:
                    return content[:5000] + "... (Content truncated as it was too long)"
                return content
            except Exception as e:
                print(f"Tool(get_web_page_content): Failure: {url}")
                return f"Error: Failed to load page. URL: {url}, Error: {e}"

        # Initialize and register core tools
        search_tool = DuckDuckGoSearchResults()
        main_tools = [subwork_done, subwork_not_done, search_tool, get_web_page_content]
        for t in main_tools:
            self.register_tool(t, tags=["default_tools", "read_tools", "all_tools"])

        # Register thesis management sub-tools
        thesis_subtools = [
            ('/thesis/write_title', thesis_write_title),
            ('/thesis/show_title', thesis_show_title),
            ('/thesis/new_chapter', thesis_new_chapter),
            ('/thesis/write_chapter', thesis_write_chapter),
            ('/thesis/delete_last_chapter', thesis_delete_last_chapter),
            ('/thesis/read_chapter', thesis_read_chapter),
            ('/thesis/list_chapters', thesis_list_chapters),
        ]
        self.register_subtools(
            directory="/thesis",
            subtools=thesis_subtools,
            description="Sub-tools for writing the thesis.",
            content=dedent("""\
            These are a collection of sub-tools for writing a thesis.

            The thesis (/thesis) is created by dividing it into chapters, each in Markdown format.
            The thesis title is formatted with a single #, and chapter titles are formatted with ##,
            so please write using the appropriate hierarchy.
            Internal memory references like [memory:...] must NOT be included in the thesis.
            Therefore, please record reference URLs separately when storing data in memory.

            Chapters begin with Chapter 0, which is always designated as the Overview.
            """),
            tags=["default_tools", "read_tools", "all_tools"]
        )

        # Register system control sub-tools
        sys_subtools = [
            ('/sys/show_user_demand', show_user_demand),
            ('/sys/show_current_work', show_current_work),
            ('/sys/show_execution_map', show_execution_map),
        ]
        self.register_subtools(
            directory="/sys",
            subtools=sys_subtools,
            tags=["default_tools", "read_tools", "all_tools"]
        )

    def main_loop (self, user_demand):
        self.user_demand = user_demand
        self.current_state = 0
        self.resume()

    def resume (self):
        if self.current_state == 0:
            self.execute_planning()
            self.current_state = 1
            self.save()
            print("\n\n----------\n\n")
        while self.current_state in [1, 2]:
            if self.current_state == 1:
                self.execute_investigation()
                self.current_state = 2
                self.save()
                print("\n\n----------\n\n")
            if self.current_state == 2:
                r = self.execute_check_of_investigation()
                if r:
                    self.current_state = 3
                else:
                    self.current_state = 1
                self.save()
                print("\n\n----------\n\n")
        while self.current_state in [3, 4, 5]:
            if self.current_state == 3:
                self.execute_writing()
                self.current_state = 4
                self.save()
                print("\n\n----------\n\n")
            if self.current_state == 4:
                r = self.execute_check_of_writing()
                if r:
                    self.current_state = 6
                else:
                    self.current_state = 5
                self.save()
                print("\n\n----------\n\n")
            if self.current_state == 5:
                self.execute_reinvestigation()
                self.current_state = 3
                self.save()
                print("\n\n----------\n\n")
        print("Done!")

    def write_loop(self):
        if self.current_state not in [3, 4, 5]:
            self.current_state = 3
        self.resume()

    def _execute(self, current_work: str, tool_name: str, aux_prompt: str) -> bool:
        """Helper to run a workflow step with specific instructions."""
        self.current_work = current_work
        user_input = f"""\
Main Objective: "{self.user_demand}"
Core Context: "{self.core_context}"
Plan and Policy: "{self.plan}"
Scratchpad: "{self.scratchpad}"
Please proceed while checking if necessary data is already recorded.
Current Task or Sub-objective: "{self.current_work}"
"""
        print(f"USER_INPUT: {user_input}")
        self.messages.append(HumanMessage(user_input))

        # Update the enforcement bandit for the specific sub-task
        self.workflows['workflow:main']['stack'][0] = {
            'tool_name': tool_name,
            'tools_name': 'all_tools',
            'exec_mode': 'persistent',
            'aux_prompt': aux_prompt,
            'arg': None,
            'prob': 1.0,
            'times': 1,
            'pin': 'write'
        }
        self.run("workflow:main")

        return self.subwork_done

    def execute_planning(self):
        return self._execute(
            current_work="Please create a plan in memory according to the main objective.",
            tool_name='subwork_done',
            aux_prompt="Call subwork_done once you have finished planning.",
        )

    def execute_investigation(self):
        return self._execute(
            current_work="Investigate the web, collect data, and record it in memory chapter by chapter for sections not yet covered.",
            tool_name='subwork_done',
            aux_prompt="Call subwork_done once you have finished researching one chapter.",
        )

    def execute_check_of_investigation(self):
        return self._execute(
            current_work="Verify if the research data stored in memory is sufficient for all chapters.",
            tool_name='subwork_done OR subwork_not_done',
            aux_prompt="Call subwork_done if research for all chapters is sufficient. Call subwork_not_done if you determine it is insufficient.",
        )

    def execute_writing(self):
        return self._execute(
            current_work="Write a chapter of the thesis (/thesis) that has not yet been written or is insufficient, based on the research data in memory. Note that Chapter 0 is designated as the Overview.",
            tool_name='subwork_done',
            aux_prompt="Call subwork_done once you have written one chapter of a high-quality thesis.",
        )

    def execute_check_of_writing(self):
        return self._execute(
            current_work="Check if all chapters of the thesis (/thesis) based on the research data are completed. Finally, set the title and rewrite the overview in Chapter 0.",
            tool_name='subwork_done OR subwork_not_done',
            aux_prompt="Call subwork_done once all chapters are written, the title is set, and the overview is rewritten. Call subwork_not_done if all chapters are not yet written.",
        )

    def execute_reinvestigation(self):
        return self._execute(
            current_work="If additional research is still needed, re-investigate and record findings in memory according to the main objective.",
            tool_name='subwork_done',
            aux_prompt="Call subwork_done after you have finished the additional research.",
        )

    def init_memories(self):
        super().init_memories()
        # Seed memories could be pre-loaded here; the list is intentionally
        # left empty in this experiment.
        memories = []
        for x in memories:
            self.update_keywords(x['text'])
            self.memories[x['id']] = x
            self.update_vector(x)

    def init_workflows(self):
        """Sets up the initial workflow bandits for the RAG agent."""
        super().init_workflows()
        workflow_main = self.workflows['workflow:main']['stack']

        # Placeholder and system control bandits
        workflow_main = [
            {
                'tool_name': 'subwork_done',
                'tools_name': 'all_tools',
                'exec_mode': 'persistent',
                'aux_prompt': "Temporary registration.",
                'arg': None,
                'prob': 1.0,
                'times': 1,
                'pin': 'write'
            }
        ] + workflow_main + [
            {
                'tool_name': 'subtool_show',
                'tools_name': 'read_tools',
                'exec_mode': 'persistent',
                'aux_prompt': "",
                'arg': '/thesis',
                'prob': 0.1,
                'times': 1,
                'pin': None
            },
            {
                'tool_name': '/sys/show_execution_map',
                'tools_name': 'default_tools',
                'exec_mode': 'once',
                'aux_prompt': "",
                'arg': None,
                'prob': 1.0,
                'times': 1,
                'pin': None
            },
        ]
        self.workflows['workflow:main']['stack'] = workflow_main

    def display_thesis(self):
        """Renders the current thesis structure as a formatted Markdown string."""
        s = f"# {self.thesis['title']}\n\n"
        for i, x in enumerate(self.thesis['chapters']):
            # Skip adding a heading if the text already starts with one
            if re.search(r"^##", x['text']):
                pass
            # If it is the overview, or the title already contains a chapter
            # number or the word "Chapter", use the title as-is
            elif i == 0 or re.search(r"Chapter|[0-9]+", x['title'], re.IGNORECASE):
                s += f"## {x['title']}\n\n"
            else:
                s += f"## Chapter {i}: {x['title']}\n\n"
            s += f"{x['text']}\n\n"
        return Markdown(s)

Instantiating the class.

In [ ]:
agent = RagAgent(llm=llm, llm2=llm, emb_llm=emb_llm, save_file=RAG_AGENT_SAVE)
agent.save()

To resume from a previously saved state, skip the preceding cells and execute only the following.

In [ ]:
agent = RagAgent.load(RAG_AGENT_SAVE, llm=llm, llm2=llm, emb_llm=emb_llm)

Now, let's run it.

In [ ]:
agent.main_loop("Please summarize into a thesis what should have been done during Japan's 'lost decades'.")
USER_INPUT: Main Objective: "Please summarize into a thesis what should have been done during Japan's 'lost decades'."
Core Context: ""
Plan and Policy: "Plan and policy have not been set yet."
Scratchpad: ""
Please proceed while checking if necessary data is already recorded.
Current Task or Sub-objective: "Please create a plan in memory according to the main objective."



----------


USER_INPUT: While using various tools for assistance, eventually use /sys/show_execution_map with appropriate parameters.
Tool result(subtool_show): '---\nname: /\ndescription: Sub-tool Root. Explains how to explore available...'
Tool result(subtool_show): '---\nname: /sys\ndescription: Essential system sub-tools.\nallowed-tools: N...'
Tool result(subtool_show): '---\nname: /thesis\ndescription: Sub-tools for writing the thesis.\nallowed...'
Tool result(memory_read): "Error: Memory ID 'memory:9999' not found."
Tool result(memory_list_recent): '{"status": "error", "result": "Error: No recent memories found."}'
Tool(show_execution_map): Main objective: "Please summarize into a thesis what should have been done during Japan's 'lost decades'."

Execution Map overview (Begins at State 0):
Transitions occur when 'subwork_done' or 'subwork_not_done' is called as instructed.

State 0: Plan strategy and outline in memory. -> Completion moves to State 1.
State 1: Research via web and collect data chapter by chapter. Record findings in memory. -> Completion moves to State 2.
State 2: Verify if collected data is sufficient for all chapters. -> Completion moves to State 3. Incompletion returns to State 1.
State 3: Write thesis (/thesis) chapters based on memory data. -> Completion moves to State 4.
State 4: Check if all chapters are complete. Finalize title and rewrite Overview. -> Completion moves to State 6. Incompletion moves to State 5.
State 5: Perform additional research if necessary. -> Completion moves back to State 3.
State 6: Finished! Complete.

Current State: 0

Tool result(/sys/show_execution_map): 'Main objective: "Please summarize into a thesis what should have been done ...'
Agent response: 
USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool(memory_new): {'id': 'memory:1000', 'title': "Thesis Plan: Japan's Lost Decades", 'accesse...}
Tool result(memory_new): 'memory:1000'
Tool(update_plan): Plan and policy updated.: Policy: 
1. Conduct deep research using DuckDuckGo and web content fetching to understand the economic nuances of Japan's Lost Decades.
2. Store gathered information in memory with clear titles and reference URLs.
3. Follow the chapter structure defined in [memory:1000].
4. Use the /thesis sub-tools to build the document.

Plan:
- Step 1: Research the initial response to the bubble collapse (late 80s - early 90s).
- Step 2: Research the banking crisis and the "zombie company" phenomenon.
- Step 3: Research monetary policy debates (liquidity trap, QE).
- Step 4: Write Chapter 0 (Overview).
- Step 5: Write Chapter 1 (Anatomy of Collapse).
- Step 6: Proceed through subsequent chapters.
- Step 7: Final review and completion.
Tool result(/sys/update_plan): 'Plan and policy updated.'
Tool(update_core): Core context updated.: Thesis Plan: memory:1000
Current Focus: Researching initial response to bubble collapse and banking crisis.
Tool result(/sys/update_core): 'Core context updated.'
Tool result(memory_read): '{"id": "memory:1000", "title": "Thesis Plan: Japan\'s Lost Decades", "acces...'
Agent response: 
USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool result(duckduckgo_results_json): "snippet: In conclusion, Japan's economic crisis and the subsequent Lost Dec..."
Tool result(duckduckgo_results_json): 'snippet: Japan\'s economic crisis, known as the " Lost Decade ," began in t...'
Tool result(duckduckgo_results_json): 'snippet: December 31, 2025 - As economist Paul Krugman explained, " Japan\'...'
Tool(get_web_page_content): Success: https://en.wikipedia.org/wiki/Lost_Decades
Tool result(get_web_page_content): "\n\n\n\nLost Decades - Wikipedia\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\..."
Tool(get_web_page_content): Success: https://en.wikipedia.org/wiki/Japanese_asset_price_bubble
Tool result(get_web_page_content): "\n\n\n\nJapanese asset price bubble - Wikipedia\n\n\n\n\n\n\n\n\n\n\n\n\n\n..."
Tool result(/thesis/write_title): 'Error: Invalid argument format.\nDetails: [{\'type\': \'missing\', \'loc\':...'
Tool(thesis_write_title): Thesis title has been updated.: Alternative Economic and Policy Measures for Japan's Lost Decades
Tool result(/thesis/write_title): 'Thesis title has been updated.'
Tool result(/thesis/new_chapter): 'Error: Invalid argument format.\nDetails: [{\'type\': \'missing\', \'loc\':...'
Recursion Limit reached.
USER_INPUT: While using various tools for assistance, eventually use memory_new OR memory_update_string OR memory_append_string (one of them) with appropriate parameters.
Tool result(subtool_show): '---\nname: /thesis/new_chapter\ndescription: thesis_new_chapter\nallowed-to...'
Tool(thesis_new_chapter): {'title': 'Overview', 'text': '## Chapter 0: Overview\n\nThe Japanese "Lost ...}
Tool result(/thesis/new_chapter): 'Success: Created Chapter 1.'
Tool(memory_new): {'id': 'memory:1001', 'title': 'Research Data: Japan Lost Decades Phase 1', ...}
Tool result(memory_new): 'memory:1001'
Agent response: 
USER_INPUT: While using various tools for assistance, eventually use subwork_done with appropriate parameters. (Auxiliary Prompt): Call subwork_done once you have finished planning.
Tool result(memory_read): '{"id": "memory:1000", "title": "Thesis Plan: Japan\'s Lost Decades", "acces...'
Tool(update_scratchpad): Scratchpad updated.: # Thesis Progress: Japan's Lost Decades

## Status
- Title: "Alternative Economic and Policy Measures for Japan's Lost Decades" (Set)
- Chapter 0: Overview (Completed)
- Phase 1 Research (Bubble/Banking Crisis): Completed & stored in [memory:1001]
- Phase 1 Research (Monetary/Fiscal/Structural): In progress

## Updated Plan
1. Research for Chapter 1: Deep dive into the asset bubble mechanics and the "convoy system" of the early 90s.
2. Draft Chapter 1: The Anatomy of the Collapse.
3. Research for Chapter 2 & 3: Monetary and Fiscal policies (Krugman's liquidity trap vs Koo's balance sheet recession).
4. Draft Chapter 2: Monetary Policy Missteps and Corrections.
5. Draft Chapter 3: The Fiscal Dilemma.
6. Research for Chapter 4: Structural Barriers.
7. Draft Chapter 4: Structural Barriers and Missed Opportunities.
8. Synthesize Chapter 5: A Counterfactual Roadmap.
9. Finalize Chapter 6: Conclusion.

## Policy
- Cite academic critiques (Bernanke, Posen, Koo, Krugman) to ground counterfactuals.
- Maintain Markdown hierarchy (# Title, ## Chapter Title).
- No [memory:xxxx] references in the final thesis text.
- Use /thesis/ sub-tools for all writing operations.
Tool result(update_scratchpad): 'Scratchpad updated.'
Tool result(subwork_done): 'Success. Sub-task completion declared.'
Agent response: 


----------


USER_INPUT: Main Objective: "Please summarize into a thesis what should have been done during Japan's 'lost decades'."
Core Context: "Thesis Plan: memory:1000
Current Focus: Researching initial response to bubble collapse and banking crisis."
Plan and Policy: "Policy: 
1. Conduct deep research using DuckDuckGo and web content fetching to understand the economic nuances of Japan's Lost Decades.
2. Store gathered information in memory with clear titles and reference URLs.
3. Follow the chapter structure defined in [memory:1000].
4. Use the /thesis sub-tools to build the document.

Plan:
- Step 1: Research the initial response to the bubble collapse (late 80s - early 90s).
- Step 2: Research the banking crisis and the "zombie company" phenomenon.
- Step 3: Research monetary policy debates (liquidity trap, QE).
- Step 4: Write Chapter 0 (Overview).
- Step 5: Write Chapter 1 (Anatomy of Collapse).
- Step 6: Proceed through subsequent chapters.
- Step 7: Final review and completion."
Scratchpad: "# Thesis Progress: Japan's Lost Decades

## Status
- Title: "Alternative Economic and Policy Measures for Japan's Lost Decades" (Set)
- Chapter 0: Overview (Completed)
- Phase 1 Research (Bubble/Banking Crisis): Completed & stored in [memory:1001]
- Phase 1 Research (Monetary/Fiscal/Structural): In progress

## Updated Plan
1. Research for Chapter 1: Deep dive into the asset bubble mechanics and the "convoy system" of the early 90s.
2. Draft Chapter 1: The Anatomy of the Collapse.
3. Research for Chapter 2 & 3: Monetary and Fiscal policies (Krugman's liquidity trap vs Koo's balance sheet recession).
4. Draft Chapter 2: Monetary Policy Missteps and Corrections.
5. Draft Chapter 3: The Fiscal Dilemma.
6. Research for Chapter 4: Structural Barriers.
7. Draft Chapter 4: Structural Barriers and Missed Opportunities.
8. Synthesize Chapter 5: A Counterfactual Roadmap.
9. Finalize Chapter 6: Conclusion.

## Policy
- Cite academic critiques (Bernanke, Posen, Koo, Krugman) to ground counterfactuals.
- Maintain Markdown hierarchy (# Title, ## Chapter Title).
- No [memory:xxxx] references in the final thesis text.
- Use /thesis/ sub-tools for all writing operations."
Please proceed while checking if necessary data is already recorded.
Current Task or Sub-objective: "Investigate the web, collect data, and record it in memory chapter by chapter for sections not yet covered."



----------


USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool result(memory_read): '{"id": "memory:1001", "title": "Research Data: Japan Lost Decades Phase 1",...'
Agent response: {"id": "memory:1001", "title": "Research Data: Japan Lost Decades Phase 1", "accesses": 1.0, "text": "### Research Log: Japan's Lost Decades (Step 1)\n\n**Key Findings on Bubble Collapse & Banking Crisis (1990-1995):**\n1. **The Policy Lag:** The Bank of Japan (BoJ) delayed tightening in 1987 due to Black Monday concerns, fueling the bubble. Post-collapse, the BoJ was slow to cut rates, fearing a rekindling of the bubble.\n2. **Banking Inertia:** The Ministry of Finance (MoF) and banks adopted a \"convoy system\" (goso sendan), hoping that time and economic recovery would fix non-performing loans (NPLs). This led to \"forbearance lending,\" creating zombie firms.\n3. **Monetary Missteps:** Interest rates were not cut aggressively enough. The discount rate remained at 6% in 1991 and only reached 0.5% by 1995.\n4. **Initial Fiscal Response:** While some stimulus was provided, it was often uncoordinated and focused on public works with low multipliers rather than bank recapitalization.\n\n**Counterfactual Arguments Identified:**\n- **Bernanke/Krugman:** BoJ should have targeted higher inflation expectations earlier to break the liquidity trap.\n- **Richard Koo:** Fiscal policy should have been even more sustained and aggressive to offset private sector deleveraging (Balance Sheet Recession).\n- **Adam Posen:** Faster, forced recognition of NPLs and bank closures/mergers were necessary to restore the credit channel.\n\n**Reference URLs:**\n- https://worldhistoryjournal.com/2025/03/13/understanding-japans-economic-crisis-key-lessons/\n- https://en.wikipedia.org/wiki/Lost_Decades\n- https://en.wikipedia.org/wiki/Japanese_asset_price_bubble\n- https://www.imf.org/en/publications/wp/issues/2025/11/06/macroeconomic-effects-and-spillovers-from-bank-of-japan-unconventional-monetary-policy-571620\n", "modified_at": "2026-02-12T17:28:58.072734"}
USER_INPUT: While using various tools for assistance, eventually use subwork_done with appropriate parameters. (Auxiliary Prompt): Call subwork_done once you have finished researching one chapter.
Tool result(duckduckgo_results_json): "snippet: That’s what then-Princeton professor Ben Bernanke did in 1999 , wh..."
Tool result(duckduckgo_results_json): 'snippet: ↑ Paul R. Krugman , Kathryn M. Dominquez, Kenneth Rogoff: It \' s ...'
Tool result(duckduckgo_results_json): "snippet: The Lost Decades are a lengthy period of economic stagnation in Ja..."
Tool result(duckduckgo_results_json): 'snippet: In 1997 , Prime Minister Ryutaro Hashimoto raised the consumption ...'
Tool(get_web_page_content): Success: https://web.mit.edu/krugman/www/bpea_jp.pdf
Tool result(get_web_page_content): '%PDF-1.2 \r\n%âãÏÓ\r\n \r\n7 0 obj\r\n<<\r\n/Length 8 0 R\r\n/Filter /LZWDe...'
Tool result(duckduckgo_results_json): 'snippet: December 31, 2025 - In his view, this avoided a U.S. ... their lia...'
Tool(memory_new): {'id': 'memory:1002', 'title': 'Research Data: Japan Lost Decades Phase 2', ...}
Tool result(memory_new): 'memory:1002'
Tool result(duckduckgo_results_json): 'snippet: Galapagos effect ( Galapagos syndrome ). Japan Galapagos effect ex...'
Tool result(duckduckgo_results_json): "snippet: In order to reduce the high labour cost, firms started to restrict..."
Tool result(duckduckgo_results_json): 'snippet: December 31, 2025 - This tendency, coinciding with the acceleratio...'
Tool(memory_new): {'id': 'memory:1003', 'title': 'Research Data: Japan Lost Decades Phase 3', ...}
Tool result(memory_new): 'memory:1003'
Tool(update_scratchpad): Scratchpad updated.: # Thesis Progress: Alternative Economic and Policy Measures for Japan's Lost Decades

## Status
- **Title & Overview (Chapter 0):** Completed.
- **Phase 1 Research (Bubble/Banking):** Completed [memory:1001].
- **Phase 2 Research (Monetary/Fiscal):** Completed [memory:1002].
- **Phase 3 Research (Structural/Synthesis):** Completed [memory:1003].
- **Chapter 1 Writing:** Next.

## Chapter Plan
- Chapter 0: Overview (Complete)
- Chapter 1: The Anatomy of the Collapse (Asset Bubble & Convoy System)
- Chapter 2: Monetary Policy Missteps (Liquidity Trap & Inflation Targeting)
- Chapter 3: The Fiscal Dilemma (Premature Austerity vs. Balance Sheet Recession)
- Chapter 4: Structural Barriers (Labor Duality & Zombie Firms)
- Chapter 5: A Counterfactual Roadmap (The "Alternative Manual")
- Chapter 6: Global Implications (Lessons for the West post-2008)
- Chapter 7: Conclusion

## Current Task
- Draft Chapter 1.
- Continue to Chapter 2.
Tool result(update_scratchpad): 'Scratchpad updated.'
Tool result(subwork_done): 'Success. Sub-task completion declared.'
Agent response: 


----------


USER_INPUT: Main Objective: "Please summarize into a thesis what should have been done during Japan's 'lost decades'."
Core Context: "Thesis Plan: memory:1000
Current Focus: Researching initial response to bubble collapse and banking crisis."
Plan and Policy: "Policy: 
1. Conduct deep research using DuckDuckGo and web content fetching to understand the economic nuances of Japan's Lost Decades.
2. Store gathered information in memory with clear titles and reference URLs.
3. Follow the chapter structure defined in [memory:1000].
4. Use the /thesis sub-tools to build the document.

Plan:
- Step 1: Research the initial response to the bubble collapse (late 80s - early 90s).
- Step 2: Research the banking crisis and the "zombie company" phenomenon.
- Step 3: Research monetary policy debates (liquidity trap, QE).
- Step 4: Write Chapter 0 (Overview).
- Step 5: Write Chapter 1 (Anatomy of Collapse).
- Step 6: Proceed through subsequent chapters.
- Step 7: Final review and completion."
Scratchpad: "# Thesis Progress: Alternative Economic and Policy Measures for Japan's Lost Decades

## Status
- **Title & Overview (Chapter 0):** Completed.
- **Phase 1 Research (Bubble/Banking):** Completed [memory:1001].
- **Phase 2 Research (Monetary/Fiscal):** Completed [memory:1002].
- **Phase 3 Research (Structural/Synthesis):** Completed [memory:1003].
- **Chapter 1 Writing:** Next.

## Chapter Plan
- Chapter 0: Overview (Complete)
- Chapter 1: The Anatomy of the Collapse (Asset Bubble & Convoy System)
- Chapter 2: Monetary Policy Missteps (Liquidity Trap & Inflation Targeting)
- Chapter 3: The Fiscal Dilemma (Premature Austerity vs. Balance Sheet Recession)
- Chapter 4: Structural Barriers (Labor Duality & Zombie Firms)
- Chapter 5: A Counterfactual Roadmap (The "Alternative Manual")
- Chapter 6: Global Implications (Lessons for the West post-2008)
- Chapter 7: Conclusion

## Current Task
- Draft Chapter 1.
- Continue to Chapter 2."
Please proceed while checking if necessary data is already recorded.
Current Task or Sub-objective: "Verify if the research data stored in memory is sufficient for all chapters."



----------


USER_INPUT: While using various tools for assistance, eventually use subtool_show with appropriate parameters. (Auxiliary Prompt): Please read the skill for /thesis.
Tool result(subtool_show): '---\nname: /thesis\ndescription: Sub-tools for writing the thesis.\nallowed...'
Agent response: ---
name: /thesis
description: Sub-tools for writing the thesis.
allowed-tools: No special permission is required to use this sub-skill.
---
These are a collection of sub-tools for writing a thesis.

The thesis (/thesis) is created by dividing it into chapters, each in Markdown format. 
The thesis title is formatted with a single #, and chapter titles are formatted with ##, 
so please write using the appropriate hierarchy. 
Internal memory references like [memory:...] must NOT be included in the thesis. 
Therefore, please record reference URLs separately when storing data in memory.

Chapters begin with Chapter 0, which is always designated as the Overview.

## Sub-tools


### Sub-tool: /thesis/write_title

[Sub-tool Name] /thesis/write_title
[Original Tool Name] thesis_write_title
[Original Usage] thesis_write_title(new_title)
[Description] Updates the title of the thesis.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/write_title", {"new_title": ...})

### Sub-tool: /thesis/show_title

[Sub-tool Name] /thesis/show_title
[Original Tool Name] thesis_show_title
[Original Usage] thesis_show_title()
[Description] Returns the current thesis title.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/show_title", {})

### Sub-tool: /thesis/new_chapter

[Sub-tool Name] /thesis/new_chapter
[Original Tool Name] thesis_new_chapter
[Original Usage] thesis_new_chapter(title, text)
[Description] Creates a new chapter with the given title and text content.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/new_chapter", {"title": ..., "text": ...})

### Sub-tool: /thesis/write_chapter

[Sub-tool Name] /thesis/write_chapter
[Original Tool Name] thesis_write_chapter
[Original Usage] thesis_write_chapter(chapter_num, new_title, new_text)
[Description] Replaces the content of an existing chapter with new_title and new_text.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/write_chapter", {"chapter_num": ..., "new_title": ..., "new_text": ...})

### Sub-tool: /thesis/delete_last_chapter

[Sub-tool Name] /thesis/delete_last_chapter
[Original Tool Name] thesis_delete_last_chapter
[Original Usage] thesis_delete_last_chapter()
[Description] Deletes the very last chapter of the thesis.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/delete_last_chapter", {})

### Sub-tool: /thesis/read_chapter

[Sub-tool Name] /thesis/read_chapter
[Original Tool Name] thesis_read_chapter
[Original Usage] thesis_read_chapter(chapter_num)
[Description] Reads the content of the specified chapter number.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/read_chapter", {"chapter_num": ...})

### Sub-tool: /thesis/list_chapters

[Sub-tool Name] /thesis/list_chapters
[Original Tool Name] thesis_list_chapters
[Original Usage] thesis_list_chapters()
[Description] Returns a list of all current chapter titles.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/list_chapters", {})

USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool result(memory_read): '{"id": "memory:1000", "title": "Thesis Plan: Japan\'s Lost Decades", "acces...'
Agent response: {"id": "memory:1000", "title": "Thesis Plan: Japan's Lost Decades", "accesses": 3.0, "text": "# Thesis Plan: Japan's Lost Decades - What Should Have Been Done\n\n## Phase 1: Information Gathering & Research\n1. Identify the key economic, social, and political factors of Japan's \"Lost Decades\" (roughly 1991-2021).\n2. Research the mainstream economic critiques (monetary policy, fiscal policy, structural reforms).\n3. Investigate specific alternative proposals and \"what-if\" scenarios from economists (e.g., Paul Krugman, Richard Koo, Joseph Stiglitz).\n4. Gather data on:\n    - Asset bubble collapse (1991).\n    - Banking crisis and delayed disposal of bad loans.\n    - Monetary policy (interest rates, QE).\n    - Fiscal policy (stimulus vs. austerity).\n    - Structural issues (demographics, labor market rigidity).\n\n## Phase 2: Thesis Structure Definition\n- **Chapter 0: Overview** (Summary of the thesis and core argument).\n- **Chapter 1: The Anatomy of the Collapse** (Context of the bubble and the initial response).\n- **Chapter 2: Monetary Policy Missteps and Corrections** (Discussion of the BoJ's actions).\n- **Chapter 3: The Fiscal Dilemma** (Stimulus efficiency and the debt trap).\n- **Chapter 4: Structural Barriers and Missed Opportunities** (Labor, innovation, and demographics).\n- **Chapter 5: A Counterfactual Roadmap** (Consolidated recommendations on what should have been done).\n- **Chapter 6: Conclusion** (Lessons for the future).\n\n## Phase 3: Writing & Review\n1. Draft chapters sequentially.\n2. Review for consistency and remove any internal memory references.\n3. Finalize the thesis.", "modified_at": "2026-02-12T17:27:14.797162"}
USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool result(/thesis/read_chapter): 'Error: Invalid argument format.\nDetails: [{\'type\': \'missing\', \'loc\':...'
Tool result(duckduckgo_results_json): 'snippet: 3 days ago - The trend continued throughout 1987 when it touched a...'
Tool result(/thesis/list_chapters): '["Overview", "Overview"]'
Tool result(/thesis/read_chapter): 'Error: Invalid argument format.\nDetails: [{\'type\': \'missing\', \'loc\':...'
Tool(thesis_show_title): Alternative Economic and Policy Measures for Japan's Lost Decades
Tool result(/thesis/show_title): "Alternative Economic and Policy Measures for Japan's Lost Decades"
Tool result(/thesis/read_chapter): 'Error: Invalid argument format.\nDetails: [{\'type\': \'missing\', \'loc\':...'
Tool(thesis_show_title): Alternative Economic and Policy Measures for Japan's Lost Decades
Tool result(/thesis/list_chapters): '["Overview", "Overview"]'
Tool result(/thesis/show_title): "Alternative Economic and Policy Measures for Japan's Lost Decades"
Tool result(duckduckgo_results_json): 'snippet: 1 week ago - Driven by speculative investments and excessive lendi...'
Tool result(/thesis/write_chapter): 'Error: Invalid argument format.\nDetails: [{\'type\': \'missing\', \'loc\':...'
Tool result(subtool_show): '---\nname: /thesis\ndescription: Sub-tools for writing the thesis.\nallowed...'
Recursion Limit reached.
USER_INPUT: The previous instruction has not been completed yet. Previous instruction: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool result(memory_read): '{"id": "memory:1001", "title": "Research Data: Japan Lost Decades Phase 1",...'
Agent response: {"id": "memory:1002", "title": "Research Data: Japan Lost Decades Phase 2", "accesses": 2.0, "text": "### Research Log: Japan's Lost Decades (Step 2)\n\n**Chapter 2: Monetary Policy & The Liquidity Trap (1995-2006)**\n1. **The Liquidity Trap:** Paul Krugman (1998) argued that Japan was in a liquidity trap where nominal interest rates were near zero, but real rates remained too high because of deflationary expectations.\n2. **Krugman’s Solution:** The central bank must \"credibly promise to be irresponsible\"—i.e., commit to a higher inflation target (3-4%) to lower real interest rates.\n3. **Bernanke’s Critique (1999):** Bernanke criticized the BoJ for \"self-induced paralysis.\" He suggested aggressive QE, yen depreciation, and even direct cooperation with fiscal authorities (money-financed stimulus).\n4. **Historical Reality:** BoJ only introduced ZIRP in 1999 and QE in 2001, both of which were seen as \"too little, too late\" and often prematurely retracted (e.g., ending ZIRP in Aug 2000).\n\n**Chapter 3: The Fiscal Dilemma & Balance Sheet Recession**\n1. **Richard Koo’s Theory:** Private firms were not borrowing even at zero rates because they were focusing on debt repayment (deleveraging) after the bubble burst. This is a \"Balance Sheet Recession.\"\n2. **Koo’s Solution:** In such a state, the government must act as the \"borrower of last resort\" to maintain aggregate demand. Fiscal stimulus must be sustained until private balance sheets are repaired.\n3. **The 1997 Tax Hike:** Prime Minister Hashimoto raised the consumption tax from 3% to 5% and cut spending to reduce the deficit. This premature austerity triggered a deep recession and a banking crisis, proving catastrophic.\n4. **The 1998 Obuchi Stimulus:** After the 1997 disaster, the Obuchi administration launched a massive 24 trillion yen stimulus, which temporarily stabilized the economy but added significantly to national debt without solving structural issues.\n\n**Reference URLs:**\n- https://web.mit.edu/krugman/www/bpea_jp.pdf (Krugman 1998)\n- https://qz.com/29735/there-is-only-one-thing-that-can-save-japan-now-inflation (Bernanke 1999 summary)\n- https://en.wikipedia.org/wiki/Richard_Koo (Balance Sheet Recession)\n- https://global.chinadaily.com.cn/a/202309/13/WS6501a883a310d2dce4bb58e3.html (1997 Tax Hike)\n", "modified_at": "2026-02-12T17:30:30.514135"}
USER_INPUT: While using various tools for assistance, eventually use subwork_done OR subwork_not_done (one of them) with appropriate parameters. (Auxiliary Prompt): Call subwork_done if research for all chapters is sufficient. Call subwork_not_done if you determine it is insufficient.
Tool(update_scratchpad): Scratchpad updated.: # Thesis Progress
- Title: Alternative Economic and Policy Measures for Japan's Lost Decades
- Chapter 0: Overview (Completed, though duplicate entry in list_chapters suggests a need for cleanup)
- Chapter 1: Anatomy of the Collapse (Draft prepared, ready for write/new)
- Chapter 2: Monetary Policy (Research documented in [memory:1002])
- Chapter 3: Fiscal Dilemma (Research documented in [memory:1002], specific multiplier data needed)
- Chapter 4: Structural Barriers (Basic research in [memory:1003], needs more on labor market)
- Chapter 5: Counterfactual Roadmap (Synthesis phase)
- Chapter 6: Conclusion

# Technical Notes
- `subtool_do` requires exact key names (no extra quotes inside the string keys).
- `/thesis/write_chapter` requires `chapter_num`, `new_title`, and `new_text`.
- Current `list_chapters` output: ["Overview", "Overview"]. This implies index 0 and 1 are both "Overview". I should replace index 1 with "The Anatomy of the Collapse".

# Next Steps
1. Correct the duplication by writing Chapter 1.
2. Research fiscal multipliers in 1990s Japan.
3. Draft Chapter 2.
4. Update memory with new research.
Tool result(update_scratchpad): 'Scratchpad updated.'
Tool result(duckduckgo_results_json): "snippet: This literature review examines the wide-ranging estimates of the ..."
Tool result(/thesis/write_chapter): 'Error: Invalid argument format.\nDetails: [{\'type\': \'missing\', \'loc\':...'
Tool(thesis_write_chapter): {'title': 'The Anatomy of the Collapse', 'text': 'The collapse of the Japane...}
Tool result(/thesis/write_chapter): 'Success: Rewrote Chapter 1.'
Tool result(/thesis/new_chapter): "Error: Invalid argument format.\nDetails: [{'type': 'missing', 'loc': ('tex..."
Tool(thesis_new_chapter): {'title': 'Monetary Policy and the Liquidity Trap', 'text': 'By the mid-1990...}
Tool result(/thesis/new_chapter): 'Success: Created Chapter 2.'
Tool(thesis_new_chapter): {'title': 'The Fiscal Dilemma: Multipliers and Debt Overhang', 'text': 'Japa...}
Tool result(/thesis/new_chapter): 'Success: Created Chapter 3.'
Tool result(/thesis/list_chapters): '["Overview", "The Anatomy of the Collapse", "Monetary Policy and the Liquid...'
Tool result(/sys/duckduckgo_results_json): "Error: Sub-tool '/sys/duckduckgo_results_json' not found."
Recursion Limit reached.
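The "Recursion Limit reached." lines mark the workflow exhausting its per-turn step budget, after which the same instruction is re-issued. A minimal sketch of such a budgeted agent loop (purely illustrative; `run_turn` and `step_fn` are hypothetical names, not the framework's actual API):

```python
def run_turn(step_fn, state, recursion_limit=25):
    """Run one agent turn under a step budget.

    Hypothetical sketch: `step_fn` advances the agent by one tool call or
    response and reports whether the turn is finished. When the budget is
    exhausted, the caller re-prompts -- the behavior seen after
    "Recursion Limit reached." in the log.
    """
    for _ in range(recursion_limit):
        state, done = step_fn(state)
        if done:
            return state, True
    return state, False  # budget exhausted: "Recursion Limit reached."
```

In LangGraph-based stacks, the analogous knob is usually supplied at invocation time via `config={"recursion_limit": ...}`; whether MemoryBanditWorkflow reuses that mechanism is an assumption here.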
USER_INPUT: The previous instruction has not been completed yet. Previous instruction: While using various tools for assistance, eventually use subwork_done OR subwork_not_done (one of them) with appropriate parameters. (Auxiliary Prompt): Call subwork_done if research for all chapters is sufficient. Call subwork_not_done if you determine it is insufficient.
Tool result(duckduckgo_results_json): 'snippet: A keiretsu ( Japanese : 系列; literally system , series, grouping of...'
Tool result(duckduckgo_results_json): 'snippet: If the older workers actually start to take the opportunity, it ma...'
Tool(memory_new): {'id': 'memory:1004', 'title': 'Research Log Phase 3: Structural Reforms', '...}
Tool result(memory_new): 'memory:1004'
Tool(update_scratchpad): Scratchpad updated.: Current Progress:
- Chapter 0: Complete.
- Chapter 1: Complete (The Anatomy of the Collapse).
- Chapter 2: Complete (Monetary Policy and the Liquidity Trap).
- Chapter 3: Complete (The Fiscal Dilemma: Multipliers and Debt Overhang).
- Research for Chapter 4 (Structural Barriers) is collected in [memory:1004].

Next Steps:
1. Draft Chapter 4: "Structural Barriers and the Rigidity of the Japanese Model."
2. Research for Chapter 5 (The 1997 Banking Crisis and Big Bang Reforms).
3. Research for Chapter 6 (Counterfactual Recap: What if?).
4. Research for Chapter 7 (Conclusion and Policy Implications).

Policy:
- Use /thesis tools for all writing and listing.
- Maintain academic tone and counterfactual focus.
- Avoid memory references in final text.
Tool result(update_scratchpad): 'Scratchpad updated.'
Tool result(subwork_not_done): 'Success. Sub-task declared incomplete.'
Agent response: 


----------


USER_INPUT: Main Objective: "Please summarize into a thesis what should have been done during Japan's 'lost decades'."
Core Context: "Thesis Plan: memory:1000
Current Focus: Researching initial response to bubble collapse and banking crisis."
Plan and Policy: "Policy: 
1. Conduct deep research using DuckDuckGo and web content fetching to understand the economic nuances of Japan's Lost Decades.
2. Store gathered information in memory with clear titles and reference URLs.
3. Follow the chapter structure defined in [memory:1000].
4. Use the /thesis sub-tools to build the document.

Plan:
- Step 1: Research the initial response to the bubble collapse (late 80s - early 90s).
- Step 2: Research the banking crisis and the "zombie company" phenomenon.
- Step 3: Research monetary policy debates (liquidity trap, QE).
- Step 4: Write Chapter 0 (Overview).
- Step 5: Write Chapter 1 (Anatomy of Collapse).
- Step 6: Proceed through subsequent chapters.
- Step 7: Final review and completion."
Scratchpad: "Current Progress:
- Chapter 0: Complete.
- Chapter 1: Complete (The Anatomy of the Collapse).
- Chapter 2: Complete (Monetary Policy and the Liquidity Trap).
- Chapter 3: Complete (The Fiscal Dilemma: Multipliers and Debt Overhang).
- Research for Chapter 4 (Structural Barriers) is collected in [memory:1004].

Next Steps:
1. Draft Chapter 4: "Structural Barriers and the Rigidity of the Japanese Model."
2. Research for Chapter 5 (The 1997 Banking Crisis and Big Bang Reforms).
3. Research for Chapter 6 (Counterfactual Recap: What if?).
4. Research for Chapter 7 (Conclusion and Policy Implications).

Policy:
- Use /thesis tools for all writing and listing.
- Maintain academic tone and counterfactual focus.
- Avoid memory references in final text."
Please proceed while checking if necessary data is already recorded.
Current Task or Sub-objective: "Investigate the web, collect data, and record it in memory chapter by chapter for sections not yet covered."



----------


USER_INPUT: While using various tools for assistance, eventually use subtool_show with appropriate parameters. (Auxiliary Prompt): Please read the skill for /thesis.
Tool result(subtool_show): '---\nname: /thesis\ndescription: Sub-tools for writing the thesis.\nallowed...'
Agent response: ---
name: /thesis
description: Sub-tools for writing the thesis.
allowed-tools: No special permission is required to use this sub-skill.
---
These are a collection of sub-tools for writing a thesis.

The thesis (/thesis) is created by dividing it into chapters, each in Markdown format. 
The thesis title is formatted with a single #, and chapter titles are formatted with ##, 
so please write using the appropriate hierarchy. 
Internal memory references like [memory:...] must NOT be included in the thesis. 
Therefore, please record reference URLs separately when storing data in memory.

Chapters begin with Chapter 0, which is always designated as the Overview.

## Sub-tools


### Sub-tool: /thesis/write_title

[Sub-tool Name] /thesis/write_title
[Original Tool Name] thesis_write_title
[Original Usage] thesis_write_title(new_title)
[Description] Updates the title of the thesis.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/write_title", {"new_title": ...})

### Sub-tool: /thesis/show_title

[Sub-tool Name] /thesis/show_title
[Original Tool Name] thesis_show_title
[Original Usage] thesis_show_title()
[Description] Returns the current thesis title.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/show_title", {})

### Sub-tool: /thesis/new_chapter

[Sub-tool Name] /thesis/new_chapter
[Original Tool Name] thesis_new_chapter
[Original Usage] thesis_new_chapter(title, text)
[Description] Creates a new chapter with the given title and text content.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/new_chapter", {"title": ..., "text": ...})

### Sub-tool: /thesis/write_chapter

[Sub-tool Name] /thesis/write_chapter
[Original Tool Name] thesis_write_chapter
[Original Usage] thesis_write_chapter(chapter_num, new_title, new_text)
[Description] Replaces the content of an existing chapter with new_title and new_text.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/write_chapter", {"chapter_num": ..., "new_title": ..., "new_text": ...})

### Sub-tool: /thesis/delete_last_chapter

[Sub-tool Name] /thesis/delete_last_chapter
[Original Tool Name] thesis_delete_last_chapter
[Original Usage] thesis_delete_last_chapter()
[Description] Deletes the very last chapter of the thesis.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/delete_last_chapter", {})

### Sub-tool: /thesis/read_chapter

[Sub-tool Name] /thesis/read_chapter
[Original Tool Name] thesis_read_chapter
[Original Usage] thesis_read_chapter(chapter_num)
[Description] Reads the content of the specified chapter number.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/read_chapter", {"chapter_num": ...})

### Sub-tool: /thesis/list_chapters

[Sub-tool Name] /thesis/list_chapters
[Original Tool Name] thesis_list_chapters
[Original Usage] thesis_list_chapters()
[Description] Returns a list of all current chapter titles.

*Note: To execute this tool, do not call it directly. You must use subtool_do as shown below:*
[Correct Usage] subtool_do("/thesis/list_chapters", {})
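The [Correct Usage] lines above all funnel through `subtool_do`, and the error results later in the log ("Sub-tool ... not found", "Invalid argument format") suggest the dispatch shape. A rough sketch of that dispatch (hypothetical code; `SUBTOOL_REGISTRY` and the lambdas are illustrative stand-ins, not the framework's implementation):

```python
# Hypothetical registry mapping sub-tool paths to their original tools.
SUBTOOL_REGISTRY = {
    "/thesis/show_title":
        lambda: "Alternative Economic and Policy Measures for Japan's Lost Decades",
    "/thesis/write_title":
        lambda new_title: f"Success: Title set to '{new_title}'.",
}

def subtool_do(subtool_name, args):
    """Dispatch subtool_do("/thesis/...", {...}) to the underlying tool."""
    tool = SUBTOOL_REGISTRY.get(subtool_name)
    if tool is None:
        # Shape of the error seen for unknown paths such as /thesis/set_title.
        return f"Error: Sub-tool '{subtool_name}' not found."
    try:
        # The argument dict must use the exact key names from the skill text.
        return tool(**args)
    except TypeError as e:
        # Missing or extra keys surface as an invalid-argument error.
        return f"Error: Invalid argument format.\nDetails: {e}"
```

For example, `subtool_do("/thesis/show_title", {})` returns the title, while a misspelled path or a missing key falls through to one of the error branches.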

USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool result(memory_read): '{"id": "memory:1000", "title": "Thesis Plan: Japan\'s Lost Decades", "acces...'
Agent response: {"id": "memory:1000", "title": "Thesis Plan: Japan's Lost Decades", "accesses": 4.0, "text": "# Thesis Plan: Japan's Lost Decades - What Should Have Been Done\n\n## Phase 1: Information Gathering & Research\n1. Identify the key economic, social, and political factors of Japan's \"Lost Decades\" (roughly 1991-2021).\n2. Research the mainstream economic critiques (monetary policy, fiscal policy, structural reforms).\n3. Investigate specific alternative proposals and \"what-if\" scenarios from economists (e.g., Paul Krugman, Richard Koo, Joseph Stiglitz).\n4. Gather data on:\n    - Asset bubble collapse (1991).\n    - Banking crisis and delayed disposal of bad loans.\n    - Monetary policy (interest rates, QE).\n    - Fiscal policy (stimulus vs. austerity).\n    - Structural issues (demographics, labor market rigidity).\n\n## Phase 2: Thesis Structure Definition\n- **Chapter 0: Overview** (Summary of the thesis and core argument).\n- **Chapter 1: The Anatomy of the Collapse** (Context of the bubble and the initial response).\n- **Chapter 2: Monetary Policy Missteps and Corrections** (Discussion of the BoJ's actions).\n- **Chapter 3: The Fiscal Dilemma** (Stimulus efficiency and the debt trap).\n- **Chapter 4: Structural Barriers and Missed Opportunities** (Labor, innovation, and demographics).\n- **Chapter 5: A Counterfactual Roadmap** (Consolidated recommendations on what should have been done).\n- **Chapter 6: Conclusion** (Lessons for the future).\n\n## Phase 3: Writing & Review\n1. Draft chapters sequentially.\n2. Review for consistency and remove any internal memory references.\n3. Finalize the thesis.", "modified_at": "2026-02-12T17:27:14.797162"}
USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool result(/thesis/read_chapter): '{"title": "The Fiscal Dilemma: Multipliers and Debt Overhang", "text": "Jap...'
Tool result(memory_read): '{"id": "memory:1004", "title": "Research Log Phase 3: Structural Reforms", ...'
Agent response: 
USER_INPUT: While using various tools for assistance, eventually use subwork_done with appropriate parameters. (Auxiliary Prompt): Call subwork_done once you have finished researching one chapter.
Tool(thesis_new_chapter): {'title': 'Structural Barriers and the Rigidity of the Japanese Model', 'tex...}
Tool result(/thesis/new_chapter): 'Success: Created Chapter 4.'
Tool result(duckduckgo_results_json): "snippet: The Lost Decades are a lengthy period of economic stagnation in Ja..."
Tool result(duckduckgo_results_json): "snippet: December 31, 2025 - The Lost Decades are a lengthy period of econo..."
Tool result(duckduckgo_results_json): 'snippet: December 31, 2025 - The Lost Decades are a lengthy period of econo...'
Tool result(duckduckgo_results_json): "snippet: 2002 Preventing deflation : lessons from Japan's experience in the..."
Tool(memory_new): {'id': 'memory:1005', 'title': 'Research Log: Counterfactual Scenarios', 'ac...}
Tool result(memory_new): 'memory:1005'
Tool(update_scratchpad): Scratchpad updated.: - Chapters 0-4 are finalized and stored in the /thesis sub-tool.
- Chapter 5 research (Counterfactual Roadmap) is completed and stored in [memory:1005].
- Next Step: Draft Chapter 5 using the synthesis in [memory:1005].
- Chapter 6 (Conclusion) research is still needed.
Tool result(update_scratchpad): 'Scratchpad updated.'
Tool result(subwork_done): 'Success. Sub-task completion declared.'
Agent response: 


----------


USER_INPUT: Main Objective: "Please summarize into a thesis what should have been done during Japan's 'lost decades'."
Core Context: "Thesis Plan: memory:1000
Current Focus: Researching initial response to bubble collapse and banking crisis."
Plan and Policy: "Policy: 
1. Conduct deep research using DuckDuckGo and web content fetching to understand the economic nuances of Japan's Lost Decades.
2. Store gathered information in memory with clear titles and reference URLs.
3. Follow the chapter structure defined in [memory:1000].
4. Use the /thesis sub-tools to build the document.

Plan:
- Step 1: Research the initial response to the bubble collapse (late 80s - early 90s).
- Step 2: Research the banking crisis and the "zombie company" phenomenon.
- Step 3: Research monetary policy debates (liquidity trap, QE).
- Step 4: Write Chapter 0 (Overview).
- Step 5: Write Chapter 1 (Anatomy of Collapse).
- Step 6: Proceed through subsequent chapters.
- Step 7: Final review and completion."
Scratchpad: "- Chapters 0-4 are finalized and stored in the /thesis sub-tool.
- Chapter 5 research (Counterfactual Roadmap) is completed and stored in [memory:1005].
- Next Step: Draft Chapter 5 using the synthesis in [memory:1005].
- Chapter 6 (Conclusion) research is still needed."
Please proceed while checking if necessary data is already recorded.
Current Task or Sub-objective: "Verify if the research data stored in memory is sufficient for all chapters."



----------


USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool result(memory_read): '{"id": "memory:1000", "title": "Thesis Plan: Japan\'s Lost Decades", "acces...'
Agent response: {"id": "memory:1000", "title": "Thesis Plan: Japan's Lost Decades", "accesses": 5.0, "text": "# Thesis Plan: Japan's Lost Decades - What Should Have Been Done\n\n## Phase 1: Information Gathering & Research\n1. Identify the key economic, social, and political factors of Japan's \"Lost Decades\" (roughly 1991-2021).\n2. Research the mainstream economic critiques (monetary policy, fiscal policy, structural reforms).\n3. Investigate specific alternative proposals and \"what-if\" scenarios from economists (e.g., Paul Krugman, Richard Koo, Joseph Stiglitz).\n4. Gather data on:\n    - Asset bubble collapse (1991).\n    - Banking crisis and delayed disposal of bad loans.\n    - Monetary policy (interest rates, QE).\n    - Fiscal policy (stimulus vs. austerity).\n    - Structural issues (demographics, labor market rigidity).\n\n## Phase 2: Thesis Structure Definition\n- **Chapter 0: Overview** (Summary of the thesis and core argument).\n- **Chapter 1: The Anatomy of the Collapse** (Context of the bubble and the initial response).\n- **Chapter 2: Monetary Policy Missteps and Corrections** (Discussion of the BoJ's actions).\n- **Chapter 3: The Fiscal Dilemma** (Stimulus efficiency and the debt trap).\n- **Chapter 4: Structural Barriers and Missed Opportunities** (Labor, innovation, and demographics).\n- **Chapter 5: A Counterfactual Roadmap** (Consolidated recommendations on what should have been done).\n- **Chapter 6: Conclusion** (Lessons for the future).\n\n## Phase 3: Writing & Review\n1. Draft chapters sequentially.\n2. Review for consistency and remove any internal memory references.\n3. Finalize the thesis.", "modified_at": "2026-02-12T17:27:14.797162"}
USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool result(memory_read): '{"id": "memory:1004", "title": "Research Log Phase 3: Structural Reforms", ...'
Agent response: {"id": "memory:1005", "title": "Research Log: Counterfactual Scenarios", "accesses": 1.0, "text": "Research for Chapter 5: A Counterfactual Roadmap.\n\nKey findings:\n1. **Monetary Policy (Ahearne et al., 2002):** Analysis suggests that if the Bank of Japan had lowered interest rates by an additional 200 basis points starting in 1991, the deflationary spiral could have been averted. The delay in recognizing the depth of the crisis led to the Zero Lower Bound (ZLB) trap.\n2. **Banking and NPLs:** A more aggressive cleanup of Non-Performing Loans (NPLs) in 1992-1993, supported by public fund injections (recapitalization), would have restored the credit mediation function of banks years earlier. The 1997-1998 banking crisis was a direct result of the \"Convoy System\" delaying this disposal.\n3. **Fiscal Policy (Posen, 1998):** Adam Posen argued that fiscal stimulus was effective when actually implemented, but was often \"smoke and mirrors\" (announcement effects without real spending). A consistent, large-scale stimulus that avoided the 1997 tax hike would have bridged the gap in aggregate demand.\n4. **Structural Timing:** Implementing labor market reforms (increasing mobility) in the early 90s, before the \"Lost Generation\" became entrenched, would have mitigated the long-term scarring of the workforce.\n5. **Koo vs. Krugman:** Koo emphasizes that fiscal spending should have been maintained until private sectors started borrowing again, while Krugman emphasizes the need for an inflation target to lower real interest rates.\n\nReference URLs:\n- https://ideas.repec.org/p/fip/fedgif/729.html (Ahearne et al., 2002)\n- https://www.iie.com/publications/books/restoring-japans-economic-growth (Posen, 1998)\n- https://www.investopedia.com/terms/b/balance-sheet-recession.asp (Koo's Framework)", "modified_at": "2026-02-12T17:37:58.950329"}
USER_INPUT: While using various tools for assistance, eventually use memory_new OR memory_update_string OR memory_append_string (one of them) with appropriate parameters.
Tool result(memory_append_string): "Success: Appended text to memory ID 'memory:1005'."
Agent response: 
USER_INPUT: While using various tools for assistance, eventually use subwork_done OR subwork_not_done (one of them) with appropriate parameters. (Auxiliary Prompt): Call subwork_done if research for all chapters is sufficient. Call subwork_not_done if you determine it is insufficient.
Tool result(duckduckgo_results_json): "snippet: The total amount of loans held by all banks as of the end of Septe..."
Tool result(duckduckgo_results_json): "snippet: The total amount of loans held by all banks as of the end of Septe..."
Tool result(duckduckgo_results_json): "snippet: Mar 9, 1999 . The 15 banks applying for public money will together..."
Tool result(subwork_done): 'Success. Sub-task completion declared.'
Agent response: 


----------


USER_INPUT: Main Objective: "Please summarize into a thesis what should have been done during Japan's 'lost decades'."
Core Context: "Thesis Plan: memory:1000
Current Focus: Researching initial response to bubble collapse and banking crisis."
Plan and Policy: "Policy: 
1. Conduct deep research using DuckDuckGo and web content fetching to understand the economic nuances of Japan's Lost Decades.
2. Store gathered information in memory with clear titles and reference URLs.
3. Follow the chapter structure defined in [memory:1000].
4. Use the /thesis sub-tools to build the document.

Plan:
- Step 1: Research the initial response to the bubble collapse (late 80s - early 90s).
- Step 2: Research the banking crisis and the "zombie company" phenomenon.
- Step 3: Research monetary policy debates (liquidity trap, QE).
- Step 4: Write Chapter 0 (Overview).
- Step 5: Write Chapter 1 (Anatomy of Collapse).
- Step 6: Proceed through subsequent chapters.
- Step 7: Final review and completion."
Scratchpad: "- Chapters 0-4 are finalized and stored in the /thesis sub-tool.
- Chapter 5 research (Counterfactual Roadmap) is completed and stored in [memory:1005].
- Next Step: Draft Chapter 5 using the synthesis in [memory:1005].
- Chapter 6 (Conclusion) research is still needed."
Please proceed while checking if necessary data is already recorded.
Current Task or Sub-objective: "Write a chapter of the thesis (/thesis) that has not yet been written or is insufficient, based on the research data in memory. Note that Chapter 0 is designated as the Overview."



----------


USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters. (Auxiliary Prompt): Please read memory:9998.
Tool result(memory_read): '{"id": "memory:9998", "title": "Procedure for searching memory documents", ...'
Agent response: {"id": "memory:9998", "title": "Procedure for searching memory documents", "accesses": 0.3, "modified_at": "2023-01-01T00:00:00", "text": "First, use 'express_thought' to consider what kind of information you want to find.\n\nThen, associate related keywords using 'imagine_keywords'.\n\nFollowing those results, try 'memory_words_search' or 'memory_semantic_search'.\n"}
USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool result(memory_read): '{"id": "memory:1000", "title": "Thesis Plan: Japan\'s Lost Decades", "acces...'
Agent response: 
USER_INPUT: While using various tools for assistance, eventually use subwork_done with appropriate parameters. (Auxiliary Prompt): Call subwork_done once you have written one chapter of a high-quality thesis.
Tool(express_thought): Thought expressed: "I need to clarify the structure of the remaining chapters. According to the summary, Chapter 5 should cover the 1997 Banking Crisis and the 1998 interventions. However, the scratchpad mentions Chapter 5 as the 'Counterfactual Roadmap'. I will check [memory:1000] to ensure the chapter numbering matches the intended outline before writing. Once confirmed, I will draft the next chapter using the sub-tools."
Tool result(express_thought): 'null'
Tool result(memory_read): '{"id": "memory:1000", "title": "Thesis Plan: Japan\'s Lost Decades", "acces...'
Tool result(subtool_show): '---\nname: /thesis\ndescription: Sub-tools for writing the thesis.\nallowed...'
Tool result(/thesis/list_chapters): '["Overview", "The Anatomy of the Collapse", "Monetary Policy and the Liquid...'
Tool result(memory_read): '{"id": "memory:1001", "title": "Research Data: Japan Lost Decades Phase 1",...'
Tool result(memory_read): '{"id": "memory:1005", "title": "Research Log: Counterfactual Scenarios", "a...'
Tool(thesis_show_title): Alternative Economic and Policy Measures for Japan's Lost Decades
Tool result(/thesis/show_title): "Alternative Economic and Policy Measures for Japan's Lost Decades"
Tool result(memory_read): '{"id": "memory:1004", "title": "Research Log Phase 3: Structural Reforms", ...'
Tool result(duckduckgo_results_json): 'snippet: The FRC is set to approve the infusion of 7.46 trillion yen into t...'
Recursion Limit reached.
USER_INPUT: The previous instruction has not been completed yet. Previous instruction: While using various tools for assistance, eventually use subwork_done with appropriate parameters. (Auxiliary Prompt): Call subwork_done once you have written one chapter of a high-quality thesis.
Tool(thesis_new_chapter): {'title': 'A Counterfactual Roadmap', 'text': '## Chapter 5: A Counterfactua...}
Tool result(/thesis/new_chapter): 'Success: Created Chapter 5.'
Tool result(subwork_done): 'Success. Sub-task completion declared.'
Agent response: Success. Sub-task completion declared.


----------


USER_INPUT: Main Objective: "Please summarize into a thesis what should have been done during Japan's 'lost decades'."
Core Context: "Thesis Plan: memory:1000
Current Focus: Researching initial response to bubble collapse and banking crisis."
Plan and Policy: "Policy: 
1. Conduct deep research using DuckDuckGo and web content fetching to understand the economic nuances of Japan's Lost Decades.
2. Store gathered information in memory with clear titles and reference URLs.
3. Follow the chapter structure defined in [memory:1000].
4. Use the /thesis sub-tools to build the document.

Plan:
- Step 1: Research the initial response to the bubble collapse (late 80s - early 90s).
- Step 2: Research the banking crisis and the "zombie company" phenomenon.
- Step 3: Research monetary policy debates (liquidity trap, QE).
- Step 4: Write Chapter 0 (Overview).
- Step 5: Write Chapter 1 (Anatomy of Collapse).
- Step 6: Proceed through subsequent chapters.
- Step 7: Final review and completion."
Scratchpad: "- Chapters 0-4 are finalized and stored in the /thesis sub-tool.
- Chapter 5 research (Counterfactual Roadmap) is completed and stored in [memory:1005].
- Next Step: Draft Chapter 5 using the synthesis in [memory:1005].
- Chapter 6 (Conclusion) research is still needed."
Please proceed while checking if necessary data is already recorded.
Current Task or Sub-objective: "Check if all chapters of the thesis (/thesis) based on the research data are completed. Finally, set the title and rewrite the overview in Chapter 0."



----------


USER_INPUT: While using various tools for assistance, eventually use subtool_show with appropriate parameters. (Auxiliary Prompt): Please read the skill for /thesis.
Tool result(subtool_show): '---\nname: /thesis\ndescription: Sub-tools for writing the thesis.\nallowed...'
Agent response: (same /thesis skill listing as in the earlier subtool_show response)

USER_INPUT: While using various tools for assistance, eventually use memory_read with appropriate parameters.
Tool result(memory_read): '{"id": "memory:1000", "title": "Thesis Plan: Japan\'s Lost Decades", "acces...'
Agent response: 
USER_INPUT: While using various tools for assistance, eventually use memory_new OR memory_update_string OR memory_append_string (one of them) with appropriate parameters.
Tool result(/thesis/list_chapters): '["Overview", "The Anatomy of the Collapse", "Monetary Policy and the Liquid...'
Tool result(memory_read): '{"id": "memory:1000", "title": "Thesis Plan: Japan\'s Lost Decades", "acces...'
USER_INPUT2: Simulate a full-text search across all memories with search_str = 'lessons OR future OR conclusion'. 
Support OR and parentheses logic.
Use actual memory data obtained from 'read_all_memories' or 'read_all_keywords'.
Return results using the 'set_result' tool.

Status: 'error' or 'success'
Result: List of Match data (m) dictionaries:
  m['id']: Memory ID (memory:...)
  m['title']: Memory Title
  m['snippet']: Contextual snippet of text surrounding the match.

Tool2(read_all_memories): Retrieving all data...
Tool result 2(read_all_memories): '{"status": "success", "result": [{"id": "memory:9998", "title": "Procedure ...'
Tool2(read_all_keywords): Retrieving all keywords...
Tool result 2(read_all_keywords): '{"status": "success", "result": ["... : Specify keywords related to that me...'
Tool2(set_result): status: 'success', result: [{'snippet': '...Note that keywords can effectively serve as links to future...]
Tool result 2(set_result): 'null'
Sub-Agent response: null
Tool result(memory_words_search): '{"status": "success", "result": [{"snippet": "...Note that keywords can eff...'
Tool result(memory_read): '{"id": "memory:1001", "title": "Research Data: Japan Lost Decades Phase 1",...'
Tool result(duckduckgo_results_json): 'snippet: Japan\'s economic crisis, known as the " Lost Decade ," began in t...'
Tool(thesis_new_chapter): {'title': 'Conclusion: Lessons for the Global Economy', 'text': '## Chapter ...}
Tool result(/thesis/new_chapter): 'Success: Created Chapter 6.'
Tool result(/thesis/read_chapter): '{"title": "Overview", "text": ""}'
Tool result(/thesis/write_chapter): 'Error: Invalid argument format.\nDetails: [{\'type\': \'missing\', \'loc\':...'
Recursion Limit reached.
USER_INPUT: While using various tools for assistance, eventually use subwork_done OR subwork_not_done (one of them) with appropriate parameters. (Auxiliary Prompt): Call subwork_done once all chapters are written, the title is set, and the overview is rewritten. Call subwork_not_done if all chapters are not yet written.
Tool(thesis_write_chapter): {'title': 'Overview', 'text': '## Chapter 0: Overview\n\nThis thesis investi...}
Tool result(/thesis/write_chapter): 'Success: Rewrote Chapter 0.'
Tool result(/thesis/set_title): "Error: Sub-tool '/thesis/set_title' not found."
Tool result(subtool_show): '---\nname: /thesis\ndescription: Sub-tools for writing the thesis.\nallowed...'
Tool(thesis_write_title): Thesis title has been updated.: Alternative Economic and Policy Measures for Japan's Lost Decades
Tool result(/thesis/write_title): 'Thesis title has been updated.'
Tool result(/thesis/list_chapters): '["Overview", "The Anatomy of the Collapse", "Monetary Policy and the Liquid...'
Tool result(subwork_done): 'Success. Sub-task completion declared.'
Agent response: 


----------


Done!

If the run hangs after get_web_page_content, interrupt it and call resume(). For reasons I have not identified, this happens very occasionally; it was not necessary this time.

In [ ]:
agent.resume()
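
If you want to automate this recovery, the resume() call could be wrapped in a bounded retry loop. The following is only a sketch under assumptions: `run()`, `resume()`, and `is_done` are hypothetical names standing in for whatever the agent actually exposes, and `StubAgent` merely simulates a single stall.

```python
class StubAgent:
    """Minimal stand-in for the real agent: stalls once, finishes after resume()."""
    def __init__(self):
        self.is_done = False
        self._stalled_once = False

    def run(self):
        # Simulate the rare hang after get_web_page_content.
        self._stalled_once = True

    def resume(self):
        if self._stalled_once:
            self.is_done = True

def run_with_resume(agent, max_resumes=3):
    """Run the agent; if it stalls, call resume() up to max_resumes times."""
    agent.run()
    attempts = 0
    while not agent.is_done and attempts < max_resumes:
        agent.resume()
        attempts += 1
    return agent.is_done

print(run_with_resume(StubAgent()))  # True
```

Bounding the number of resumes avoids looping forever if the hang turns out to be reproducible rather than transient.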

The completed thesis.

In [ ]:
agent.display_thesis()
Out[ ]:

Alternative Economic and Policy Measures for Japan's Lost Decades¶

Chapter 0: Overview¶

This thesis investigates the causes, consequences, and missed opportunities of Japan's "Lost Decades"—the period of economic stagnation that followed the collapse of the asset price bubble in the early 1990s. While the crisis is often attributed to external shocks or demographic shifts, this work argues that the primary drivers were institutional rigidities and delayed policy responses.

The study employs a counterfactual framework to analyze how alternative fiscal, monetary, and structural interventions could have fundamentally altered Japan's economic trajectory.

Structure of the Thesis¶

  • Chapter 1 (Anatomy of the Collapse): Traces the origins of the 1980s bubble and the immediate policy failures following its 1991 burst.
  • Chapter 2 (Monetary Policy): Analyzes the Bank of Japan's delayed response and the subsequent descent into a liquidity trap.
  • Chapter 3 (The Fiscal Dilemma): Examines the efficacy of public works spending versus the burden of rising national debt.
  • Chapter 4 (Structural Barriers): Investigates the "Membership Model" of labor and the "Convoy System" in finance that hindered modernization.
  • Chapter 5 (A Counterfactual Roadmap): Presents a "what-if" scenario where aggressive intervention in the early 1990s prevents the long-term stagnation.
  • Chapter 6 (Conclusion): Synthesizes the findings and discusses the broader lessons for global economies facing similar deflationary pressures today.

The central thesis is that Japan’s stagnation was not an inevitable fate but a policy-induced malaise. By examining these failures, we can derive critical lessons for avoiding the "Japanification" of the global economy.

Chapter 1: The Anatomy of the Collapse¶

The collapse of the Japanese asset price bubble in the early 1990s was not merely a market correction but a systemic failure that exposed deep structural flaws in the nation's financial and regulatory architecture.

The Height of Exuberance¶

At the end of December 1989, the Nikkei 225 Stock Average reached an all-time high of 38,957.44. Real estate prices in Tokyo were so inflated that the land under the Imperial Palace was famously said to be worth more than all the land in California. This period was characterized by aggressive corporate expansion and speculative lending, fueled by low interest rates and a belief in the "land myth"—the idea that Japanese property values would never decrease.

The Trigger: Monetary Tightening¶

The Bank of Japan, under Governor Yasushi Mieno, began a series of aggressive interest rate hikes starting in late 1989 to curb inflation and cool the overheated markets. By August 1990, the official discount rate had risen to 6.0%. While these measures were intended to orchestrate a "soft landing," they instead triggered a precipitous crash. By 1992, the Nikkei had lost over 60% of its value, and land prices began a multi-decade decline.

The Institutional Failure: The "Convoy System"¶

The most critical failure during the immediate post-bubble years was the Ministry of Finance's "convoy system" (Goso Sendan). This regulatory approach aimed to ensure that no bank or financial institution would fail. Instead of forcing banks to recognize and write off bad loans, the ministry encouraged "evergreening"—providing more loans to insolvent borrowers to keep them from defaulting. This preserved "zombie firms" and prevented the creative destruction necessary for economic renewal.

The Policy Delay (1991–1994)¶

Throughout the early 1990s, policymakers operated under the assumption that the downturn was a cyclical recession. Consequently, fiscal and monetary responses were reactive rather than proactive. The delay in recapitalizing the banking sector until the late 1990s allowed the bad debt problem to metastasize, turning a sharp correction into a chronic stagnation.

Chapter 2: Monetary Policy and the Liquidity Trap¶

By the mid-1990s, the Bank of Japan (BoJ) found itself in uncharted territory as short-term interest rates approached the zero lower bound. This chapter examines the theoretical and practical challenges of conducting monetary policy in a liquidity trap.

The Liquidity Trap and Real Interest Rates¶

As Paul Krugman argued in 1998, a liquidity trap occurs when nominal interest rates are so low that they cannot be reduced further, yet the economy remains depressed. In Japan's case, persistent deflationary expectations meant that even with a nominal rate near zero, the real interest rate (nominal rate minus inflation) remained too high to stimulate investment.

"Self-Induced Paralysis"¶

Ben Bernanke (1999) criticized the BoJ for what he termed "self-induced paralysis." He argued that the BoJ had the tools to combat deflation but lacked the will to use them aggressively. Bernanke suggested that the central bank should have committed to a clear inflation target and engaged in large-scale asset purchases (Quantitative Easing) much earlier.

The Delayed Shift to QE¶

The BoJ finally introduced the Zero Interest Rate Policy (ZIRP) in 1999, but prematurely ended it in August 2000, only to reintroduce it and launch Quantitative Easing (QE) in 2001. This "stop-and-go" approach failed to anchor inflation expectations, as the public remained skeptical of the bank's commitment to ending deflation.

Counterfactual Analysis¶

If the BoJ had adopted an explicit 2-3% inflation target in 1992-1993 and aggressively expanded its balance sheet then, the real interest rate might have turned negative early enough to prevent the entrenchment of a deflationary mindset.

Chapter 3: The Fiscal Dilemma: Multipliers and Debt Overhang¶

Japan's fiscal policy during the 1990s is often characterized by massive public works spending that failed to achieve sustainable growth. This chapter analyzes the efficacy of these stimulus packages through the lens of the "Balance Sheet Recession."

The Balance Sheet Recession¶

Richard Koo (2003) posits that the post-bubble downturn was not a standard business cycle recession but a balance sheet recession. Following the asset price collapse, Japanese firms shifted their priority from profit maximization to debt minimization (deleveraging). Consequently, even with zero interest rates, the private sector was not borrowing, leading to a massive drain on aggregate demand.

The Role of Government as Borrower of Last Resort¶

In Koo's framework, the government must step in as the "borrower of last resort" to recycle the private sector's excess savings back into the economy. While the Japanese government did run large deficits, the spending was often erratic and uncoordinated.

The 1997 Austerity Mistake¶

The most significant fiscal policy error occurred in 1997 when the Hashimoto administration, concerned about the rising national debt, increased the consumption tax from 3% to 5% and cut spending. This premature austerity triggered a severe contraction and a banking crisis, demonstrating that fiscal consolidation during a balance sheet recession is counterproductive.

Inefficiency of Public Works¶

The stimulus was heavily tilted toward traditional infrastructure projects in rural areas, which had low marginal productivity. Research indicates that the fiscal multiplier for these projects declined significantly during the 1990s due to the overhang of corporate debt and the lack of structural reform, which meant that government spending was "filling a hole" rather than "building a bridge" to future growth.

Chapter 4: Structural Barriers and the Rigidity of the Japanese Model¶

While monetary and fiscal policies were central to the debate, Japan's "Lost Decades" were also characterized by deep-seated structural rigidities that hindered economic adjustment. This chapter examines the institutional barriers in the labor market and the corporate sector that stifled productivity and innovation.

Labor Market Rigidity: The Membership Model¶

The Japanese labor market was built on the "three pillars" of industrial relations: lifetime employment (shushin koyo), seniority-based wages (nenko joretsu), and enterprise unionism. While these provided stability during the high-growth era, they became liabilities during the 1990s.

  • Internal Unemployment: Instead of laying off workers, firms retained redundant staff to honor lifetime employment commitments. This "internal unemployment" suppressed productivity and prevented the reallocation of human capital to emerging sectors.
  • Dual Labor Market: To maintain flexibility, firms increasingly turned to "non-regular" workers (part-time, contract). This created a dual labor market where a segment of the workforce lacked job security and training opportunities, suppressing long-term human capital development.

The Keiretsu System and "Zombie" Firms¶

The "Convoy System" (goso sendan), where the Ministry of Finance and major banks coordinated to ensure no financial institution failed, extended into the corporate sector through the keiretsu system.

  • Interlocking Shareholdings: Cross-shareholding insulated management from shareholder pressure, reducing the incentive for efficiency and Return on Equity (ROE).
  • The Zombie Problem: Main banks often extended "evergreen" loans to insolvent member firms to avoid recognizing non-performing loans (NPLs). These "zombie firms" crowded out healthy companies by locking up capital and labor in unproductive activities, a phenomenon that significantly contributed to the stagnation of the 1990s.

Innovation and the Digital Divide¶

Japan's traditional strength in hardware and incremental manufacturing (kaizen) did not translate well to the software-driven "New Economy" of the 1990s. Institutional barriers, such as a lack of venture capital and a social stigma against failure, discouraged entrepreneurship. The delay in deregulating the telecommunications and service sectors further hindered the adoption of Information and Communication Technology (ICT), leaving Japan behind in the global digital transition.

The Delay in Financial Structural Reform¶

It was not until the "Big Bang" reforms of 1996 and the creation of the Financial Services Agency (FSA) in 1998 that the government moved decisively toward a market-oriented financial system. The decade-long delay in addressing the NPL problem meant that the credit mediation function of the banking system remained impaired throughout the 1990s, nullifying the effects of monetary easing.

Chapter 5: A Counterfactual Roadmap¶

The economic stagnation of Japan’s "Lost Decades" was not an inevitable outcome of the asset bubble’s collapse, but rather the result of a series of policy delays and institutional rigidities. By synthesizing the theoretical frameworks of the liquidity trap, balance sheet recession, and structural inertia, this chapter outlines a counterfactual roadmap—a "what should have been done" scenario—that could have significantly mitigated the depth and duration of the crisis.

5.1 Early and Aggressive Monetary Intervention¶

The Bank of Japan’s (BoJ) cautious approach in the early 1990s is often cited as a primary driver of the subsequent deflationary spiral. Counterfactual analysis suggests that a more proactive stance could have altered the economic trajectory.

  • The 200-Basis-Point Rule: Research indicates that if the BoJ had lowered interest rates by an additional 200 basis points as early as 1991, the economy might have avoided the zero lower bound (ZLB) trap. By the time the BoJ began aggressive cuts, the "liquidity trap" had already taken hold, rendering traditional interest rate adjustments ineffective.
  • Inflation Targeting: Instead of reacting to price drops, an early commitment to a positive inflation target (e.g., 2%) would have anchored expectations and prevented the rise in real interest rates that stifled investment and consumption.

5.2 Decisive Banking Reform and the End of the "Convoy System"¶

The "Convoy System" (goso sendan), which protected weak banks at the expense of the entire system, was the single greatest barrier to recovery. A counterfactual roadmap would have prioritized a "shock therapy" approach to the banking sector.

  • Early Recapitalization (1992-1993): Rather than waiting for the systemic crisis of 1997-1998, the Japanese government should have injected public funds into major banks by 1992. Forced recognition of non-performing loans (NPLs) at this stage would have cleaned balance sheets before the credit crunch became terminal.
  • Institutional Independence: The creation of an independent oversight body, similar to the Financial Services Agency (FSA), should have occurred in the early 90s. Removing bank supervision from the Ministry of Finance (MoF) would have broken the traditionalist "forbearance" cycle earlier, facilitating the 2003-style "Takenaka Plan" reforms a decade sooner.

5.3 Fiscal Consistency vs. Premature Austerity¶

Japan’s fiscal policy was often criticized for being "too little, too late" or, conversely, for being "smoke and mirrors." A more effective counterfactual strategy would have focused on the quality and timing of spending.

  • Avoiding the 1997 Tax Hike: The decision to increase the consumption tax from 3% to 5% in April 1997 is widely regarded as a critical error. In a counterfactual scenario, maintaining fiscal support until a self-sustaining recovery was evident—as argued by Richard Koo—would have prevented the 1997 recession and the subsequent banking collapse.
  • High-Multiplier Investments: Instead of low-utility rural public works, fiscal stimulus should have been directed toward urban infrastructure, R&D, and social safety nets that encouraged labor mobility and consumer confidence.

5.4 Labor Mobility and Social Scaffolding¶

The "Membership Model" of labor provided stability for some but created a rigid, dual labor market that penalized the youth.

  • Proactive Reform: Implementing labor market flexibility in the early 1990s—combined with a robust, portable social safety net—would have prevented the "Employment Ice Age." By facilitating the transition of workers from declining industries to emerging sectors, Japan could have maintained higher productivity levels and avoided the demographic "scarring" of the Lost Generation.

Summary of the Counterfactual Path¶

If Japan had combined a 1991 rate cut, a 1992 bank recapitalization, and a consistent fiscal policy that avoided the 1997 tax hike, the "Lost Decades" might have been reduced to a "Lost Five Years." The synthesis of these measures suggests that the cost of early, aggressive intervention—though politically difficult—would have been far lower than the decades of stagnation and the 61 trillion yen eventually required for NPL disposal.

Chapter 6: Conclusion: Lessons for the Global Economy¶

The economic history of Japan from 1991 to the present serves as a monumental case study in the risks of policy hesitation and structural inertia. While often described as a "lost" period, it is more accurately a period of profound learning that has reshaped modern macroeconomics. The preceding chapters have demonstrated that the crisis was not merely a financial collapse, but a systemic failure of the "Japanese Model" to adapt to a post-bubble reality.

6.1 Summary of Findings¶

This thesis has argued that the severity of Japan's stagnation was preventable. Key findings include:

  • The Cost of Forbearance: The "convoy system" of banking governance and the Ministry of Finance's policy of forbearance allowed "zombie firms" to survive, stifling the "creative destruction" necessary for a healthy economy.
  • The Monetary Lag: Delayed interest rate cuts and the failure to manage inflation expectations allowed Japan to fall into a liquidity trap that rendered subsequent interventions far less effective.
  • Structural Mismatch: The "Membership Model" of labor, while providing social stability, created a rigid dual labor market that hindered the transition of resources into high-productivity sectors.
  • Counterfactual Efficacy: As demonstrated in Chapter 5, early recapitalization of banks and aggressive, consistent monetary expansion could have truncated the crisis by a decade or more.

6.2 Global Implications: A Cautionary Tale¶

The lessons of Japan’s Lost Decades have become central to the policy frameworks of other major economies.

  • Post-2008 Western Response: The swift actions of the US Federal Reserve and the European Central Bank following the 2008 financial crisis—characterized by massive, early QE and bank stress tests—were direct attempts to avoid the "Japanese path."
  • The China Parallel: Today, China faces strikingly similar challenges: a property bubble collapse, an aging population, and high levels of local government debt. The Japanese experience suggests that China must prioritize the transparent disposal of bad debt over the preservation of weak corporate entities.
  • The Danger of "Japanification": The term "Japanification" now describes a state of permanent low growth, low inflation, and high debt. It serves as a reminder that once deflationary expectations take root, they are extraordinarily difficult to reverse.

6.3 Final Thoughts¶

The primary lesson of Japan's experience is that in the face of a balance sheet recession, policy speed and scale are more important than policy precision. To prevent stagnation, a nation must be willing to dismantle the very institutions—be they lifetime employment or protected banking cartels—that provided the foundation for its previous success. Japan’s recovery, though slow, eventually came through the forced recognition of these structural realities. For the rest of the world, the "Lost Decades" remain the ultimate warning against the comfort of the status quo.

This time, the thesis appears to have been completed mostly correctly.

In [ ]:
agent.listen_and_print("This was an experiment to have a RAG task performed within the MemoryBanditWorkflow framework. I believe we have successfully achieved a proof of concept. Gemini, how did you find using these tools this time? In particular, how was the experience of using the /thesis sub-tool?")
USER_INPUT: This was an experiment to have a RAG task performed within the MemoryBanditWorkflow framework. I believe we have successfully achieved a proof of concept. Gemini, how did you find using these tools this time? In particular, how was the experience of using the /thesis sub-tool?
Agent response: The experiment was highly effective, and the toolset provided a robust framework for managing a complex, multi-stage task. Here are my specific impressions of the experience:

### 1. Structural Modularity of the `/thesis` Sub-tool
The `/thesis` sub-tool is excellent for long-form content generation. Unlike standard RAG tasks where an LLM might struggle with the context window when writing 10,000+ words at once, this sub-tool allows for a **modular approach**.
*   **Context Control:** Being able to `read_chapter` selectively means I could focus on the "Fiscal Dilemma" while keeping the "Monetary Policy" chapter in "storage," preventing the context window from becoming cluttered with redundant information.
*   **State Management:** The `list_chapters` tool acted as a real-time table of contents, which is vital for maintaining narrative flow across multiple turns.

### 2. The "Sub-tool Discovery" Workflow
I found the `subtool_show("/")` and `subtool_show("/thesis")` pattern to be very natural. Even when I made an error (e.g., guessing `set_title` instead of the correct `write_title`), the ability to quickly query the tool definitions allowed for immediate self-correction. This makes the system resilient to minor "memory lapses" regarding specific tool syntax.

### 3. Integration with Memory IDs
Using `memory:9999` and `memory:1001` to store data and reference URLs separately from the final text is a sophisticated way to handle **provenance**. It ensures that the final output (the thesis) remains academic and clean, while the "working memory" retains all the technical scaffolding and source materials.

### 4. Observations on the Counterfactual Framework
The tools facilitated the "what-if" logic well. By treating the thesis as a series of files, I could treat Chapter 5 (the counterfactual) as a synthesis of the historical data gathered in Chapters 1–4. The framework allowed for a clear distinction between **historical research** and **logical modeling**.

### Final Conclusion on the POC
The **MemoryBanditWorkflow** successfully transforms an LLM from a "chat interface" into a "workspace agent." The `/thesis` sub-tool, in particular, is a powerful template for any task requiring high-volume, structured output. It turns a "writing task" into a "content management task," which significantly reduces the cognitive load on the agent and improves the coherence of the final product.

Out[ ]:
'The experiment was highly effective, and the toolset provided a robust framework for managing a complex, multi-stage task. Here are my specific impressions of the experience:\n\n### 1. Structural Modularity of the `/thesis` Sub-tool\nThe `/thesis` sub-tool is excellent for long-form content generation. Unlike standard RAG tasks where an LLM might struggle with the context window when writing 10,000+ words at once, this sub-tool allows for a **modular approach**.\n*   **Context Control:** Being able to `read_chapter` selectively means I could focus on the "Fiscal Dilemma" while keeping the "Monetary Policy" chapter in "storage," preventing the context window from becoming cluttered with redundant information.\n*   **State Management:** The `list_chapters` tool acted as a real-time table of contents, which is vital for maintaining narrative flow across multiple turns.\n\n### 2. The "Sub-tool Discovery" Workflow\nI found the `subtool_show("/")` and `subtool_show("/thesis")` pattern to be very natural. Even when I made an error (e.g., guessing `set_title` instead of the correct `write_title`), the ability to quickly query the tool definitions allowed for immediate self-correction. This makes the system resilient to minor "memory lapses" regarding specific tool syntax.\n\n### 3. Integration with Memory IDs\nUsing `memory:9999` and `memory:1001` to store data and reference URLs separately from the final text is a sophisticated way to handle **provenance**. It ensures that the final output (the thesis) remains academic and clean, while the "working memory" retains all the technical scaffolding and source materials.\n\n### 4. Observations on the Counterfactual Framework\nThe tools facilitated the "what-if" logic well. By treating the thesis as a series of files, I could treat Chapter 5 (the counterfactual) as a synthesis of the historical data gathered in Chapters 1–4. The framework allowed for a clear distinction between **historical research** and **logical modeling**.\n\n### Final Conclusion on the POC\nThe **MemoryBanditWorkflow** successfully transforms an LLM from a "chat interface" into a "workspace agent." The `/thesis` sub-tool, in particular, is a powerful template for any task requiring high-volume, structured output. It turns a "writing task" into a "content management task," which significantly reduces the cognitive load on the agent and improves the coherence of the final product.'

If listen_and_print raises an error, run the following command. It was not needed this time.

In [ ]:
agent._sanitize_messages()

A final greeting.

In [ ]:
agent.listen_and_print("Thank you very much. Goodbye.")
USER_INPUT: Thank you very much. Goodbye.
Agent response: It has been a pleasure working on this project. The successful completion of the thesis "Alternative Economic and Policy Measures for Japan's Lost Decades" demonstrates the strength of the **MemoryBanditWorkflow** in handling structured, academic-grade content generation.

### Current Status
- **Thesis Title:** Alternative Economic and Policy Measures for Japan's Lost Decades
- **Chapters:** 7 total (Chapter 0 through Chapter 6), finalized and stored.
- **Data Persistence:** Research URLs and policy data are preserved in [memory:1001].

### Policy for Future Sessions
Should this project be resumed:
1.  **Verification:** Run `/thesis/list_chapters` to confirm the presence of all 7 sections.
2.  **Expansion:** If additional depth is required (e.g., a chapter on specific regional impacts), use `/thesis/new_chapter`.
3.  **Refinement:** Use `/thesis/read_chapter` to extract content for peer-review simulations or further data integration.

Thank you for the opportunity to demonstrate these capabilities. Goodbye!

Out[ ]:
'It has been a pleasure working on this project. The successful completion of the thesis "Alternative Economic and Policy Measures for Japan\'s Lost Decades" demonstrates the strength of the **MemoryBanditWorkflow** in handling structured, academic-grade content generation.\n\n### Current Status\n- **Thesis Title:** Alternative Economic and Policy Measures for Japan\'s Lost Decades\n- **Chapters:** 7 total (Chapter 0 through Chapter 6), finalized and stored.\n- **Data Persistence:** Research URLs and policy data are preserved in [memory:1001].\n\n### Policy for Future Sessions\nShould this project be resumed:\n1.  **Verification:** Run `/thesis/list_chapters` to confirm the presence of all 7 sections.\n2.  **Expansion:** If additional depth is required (e.g., a chapter on specific regional impacts), use `/thesis/new_chapter`.\n3.  **Refinement:** Use `/thesis/read_chapter` to extract content for peer-review simulations or further data integration.\n\nThank you for the opportunity to demonstrate these capabilities. Goodbye!'