Release notes and deeper dives. More posts will show up here over time.
April 23, 2026 · Release
BananaCode v2.4.0: DeepReview & Enhanced Personalization
BananaCode v2.4.0 is here, focusing on giving users more control over how the AI interacts and introducing a powerful new audit mode.
🔍 DeepReview: Full Codebase Audit
The new /deepreview command switches BananaCode into a specialized review mode. You can choose between a Full Review (auditing the entire current codebase) or a Diff Review (reviewing only staged/unstaged changes via git diff). In this mode, BananaCode focuses purely on providing a structured report with Critical, Warning, and Suggestion findings, without making any file modifications.
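Under the hood, a review report like this can be modeled as a severity-tagged list of findings grouped into sections. A minimal sketch of the idea (the `Finding` fields and report layout are our own illustration, not BananaCode's actual internals):

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "Critical"
    WARNING = "Warning"
    SUGGESTION = "Suggestion"

@dataclass
class Finding:
    severity: Severity
    file: str
    line: int
    message: str

def format_report(findings: list[Finding]) -> str:
    """Group findings by severity, most serious first."""
    lines = []
    for sev in Severity:  # enum definition order: Critical, Warning, Suggestion
        group = [f for f in findings if f.severity is sev]
        if not group:
            continue
        lines.append(f"## {sev.value} ({len(group)})")
        for f in group:
            lines.append(f"- {f.file}:{f.line} - {f.message}")
    return "\n".join(lines)
```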
✨ Emoji & Style Personalization
We've added more ways to customize your AI pair programmer's personality:
- Emoji Modes (/emoji): Choose between Normal, Minimal, or More emojis. Whether you want a strictly professional vibe or a lively, expressive output, the choice is yours.
- Concise Style: Added to the /style command, the new Concise mode provides terse, code-first responses, skipping long preambles and summaries.
🛠️ New Tools & UI Polish
- Rename File Tool: The new rename_file tool allows the agent to move or rename files and directories safely with your permission.
- Dynamic Spinners: We've replaced the static "Thinking..." text with randomized, provider-specific verbs like "Clauding...", "Gemming...", and "Ollaming...".
- Transparent Guard: Banana Guard now explicitly states the reason behind its auto-approval decisions in the terminal.
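A safe rename of the kind the rename_file tool performs comes down to a few guard checks before the move. A rough sketch (the `ask_permission` callback and the refuse-to-overwrite policy are our assumptions, not the tool's actual code):

```python
import os
import shutil

def rename_file(src: str, dst: str, ask_permission) -> bool:
    """Move or rename a file/directory, refusing to clobber an existing target."""
    if not os.path.exists(src):
        raise FileNotFoundError(src)
    if os.path.exists(dst):
        raise FileExistsError(f"refusing to overwrite {dst}")
    if not ask_permission(f"Rename {src} -> {dst}?"):
        return False  # user declined
    # Create missing parent directories for the destination, then move.
    os.makedirs(os.path.dirname(dst) or ".", exist_ok=True)
    shutil.move(src, dst)
    return True
```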
Bug Fixes & Reliability
We've improved tool execution error handling to better manage user cancellations and repair dangling tool calls. Additionally, the startup telemetry now correctly uses https for more secure connections.
Update now with npm install -g @banaxi/banana-code and try out the new /deepreview command!
April 2026 · New Feature
Local Intelligence: LM Studio Support is Here!
Banana Code has always been about flexibility, and today we're taking a huge leap towards local-first development. We are excited to announce full, first-class support for LM Studio.
Why LM Studio?
LM Studio has become the go-to tool for running large language models (LLMs) locally on your own hardware. By integrating LM Studio, Banana Code users can now leverage powerful models like Llama 3, Mistral, and many others without needing an API key or an active internet connection for the model inference.
First-Class Features
This isn't just a simple proxy; we've implemented a full provider suite tailored for the local experience:
- Automatic Model Discovery: Banana Code can now talk to LM Studio to see which models you have currently loaded. No more manual typing of long model identifiers.
- Full Tool Calling: Local models can now use the entire Banana Code tool suite. Whether it's reading files, running terminal commands, or searching the web, your local model is now an agent.
- Streaming Responses: Get immediate feedback with real-time token streaming, just like with cloud providers.
- OpenAI-Compatible: Leverages the standardized local server API provided by LM Studio for maximum compatibility.
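Because LM Studio exposes the standard OpenAI-style chat-completions endpoint, talking to it is just a matter of building the usual request against the local base URL. A minimal sketch of assembling such a request (the model name and message shape here are illustrative):

```python
import json

def build_chat_request(base_url: str, model: str, messages, tools=None, stream=True):
    """Build an OpenAI-compatible chat-completions request for a local server.

    Returns (url, body_json) ready to POST with Content-Type: application/json.
    """
    url = base_url.rstrip("/") + "/chat/completions"
    body = {"model": model, "messages": messages, "stream": stream}
    if tools:  # tool definitions, OpenAI function-calling style
        body["tools"] = tools
    return url, json.dumps(body)
```

Posting that body to the default `http://localhost:1234/v1` base URL with `stream=True` yields server-sent chunks, which is how the real-time token streaming mentioned above works.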
Getting Started
Switching to LM Studio is simple. Just run the following command in your terminal:
/provider lmstudio
Banana Code will ask for your local server URL (defaulting to http://localhost:1234/v1) and then let you pick from your loaded models. You can also configure it during initial setup with banana --setup.
Optimized for Performance
We've included automatic JSON schema sanitization for local models, ensuring that even strict local inference engines can understand and use Banana Code's tool definitions without errors.
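Sanitization of this kind typically walks the schema tree and drops keywords a strict engine might reject. A sketch of the idea (the exact keyword set Banana Code strips isn't documented in this post, so this list is an assumption):

```python
# Assumed set of JSON Schema keywords that strict local engines may reject.
UNSUPPORTED_KEYS = {"$schema", "format", "default", "examples"}

def sanitize_schema(schema):
    """Recursively remove unsupported JSON Schema keywords."""
    if isinstance(schema, dict):
        out = {}
        for key, value in schema.items():
            if key in UNSUPPORTED_KEYS:
                continue
            if key == "properties" and isinstance(value, dict):
                # Keys under "properties" are property names, not schema
                # keywords, so never filter them -- only their sub-schemas.
                out[key] = {name: sanitize_schema(sub) for name, sub in value.items()}
            else:
                out[key] = sanitize_schema(value)
        return out
    if isinstance(schema, list):
        return [sanitize_schema(item) for item in schema]
    return schema
```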
Install Banana Code with npm install -g @banaxi/banana-code, grab LM Studio from lmstudio.ai, and start coding locally today!
April 2026 · Release
2.0.0 Released: What Changed?
Version 2.0.0 is a major step forward for Banana Code as a terminal-native AI pair programmer. Here is a concise tour of what shipped, aligned with the actual app behavior.
Smarter Auto Mode (model + effort)
When you pick Auto Mode as your model, a small router model still picks the best concrete model for each user turn—but for Claude, it now also selects a reasoning effort level (low through max, including xhigh where supported). That keeps simple questions cheap and fast while reserving depth for hard tasks. Use /effort to adjust effort manually when you are on Claude.
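Conceptually, the router's output for each turn is a (model, effort) pair. A toy sketch of that shape, with hypothetical model names and a stand-in classifier in place of the real router model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoutingDecision:
    model: str
    effort: Optional[str]  # only meaningful for Claude models

def route_turn(classify, prompt: str) -> RoutingDecision:
    """classify() stands in for the small router model's difficulty judgment."""
    difficulty = classify(prompt)  # e.g. "simple" or "hard"
    if difficulty == "simple":
        # Cheap model, shallow reasoning for easy questions.
        return RoutingDecision(model="claude-haiku", effort="low")
    # Stronger model with deeper reasoning reserved for hard tasks.
    return RoutingDecision(model="claude-opus", effort="high")
```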
Interactive terminal suite
Banana Code moves beyond one-shot shell runs. New tools drive a persistent PTY:
- execute_command_in_terminal: start an interactive command (e.g. npm init, wizards).
- send_to_terminal: send stdin (remember \n for Enter) for Y/N prompts, wizards, or editors.
- terminate_terminal_session: clean up when the session is done.
Together, these let the agent work through flows that used to stall on non-interactive runners—while one-off tasks still use execute_command.
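On POSIX systems, the same pattern can be sketched with Python's pty module: fork the command onto a pseudo-terminal, write to its stdin through the master fd, and tear the session down when done. This is an illustration of the PTY approach, not Banana Code's implementation:

```python
import os
import pty
import signal

def execute_command_in_terminal(argv):
    """Spawn a command attached to a pseudo-terminal; returns (pid, master_fd)."""
    pid, fd = pty.fork()
    if pid == 0:  # child: run the command on the PTY slave
        os.execvp(argv[0], argv)
    return pid, fd

def send_to_terminal(fd, text):
    """Write text to the command's stdin; a trailing newline acts as Enter."""
    os.write(fd, text.encode())

def terminate_terminal_session(pid, fd):
    """Close the PTY and make sure the child process is reaped."""
    os.close(fd)
    try:
        os.kill(pid, signal.SIGTERM)
    except ProcessLookupError:
        pass
    os.waitpid(pid, 0)
```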
Financial intelligence
For providers that expose usage (notably Anthropic), the app tracks real session spend and estimates what you saved with Prompt Caching. Run /context for a breakdown (messages, estimated tokens by category, cost, cache savings). On exit, you get a final session cost summary when costing is available.
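The cache-savings estimate amounts to pricing cached reads at the discounted rate and comparing against what the same tokens would have cost uncached. A sketch with hypothetical per-million-token prices (real rates vary by model and provider):

```python
# Hypothetical prices in dollars per million tokens -- illustrative only.
PRICE_INPUT = 3.00
PRICE_CACHE_READ = 0.30   # cached input reads are typically ~10x cheaper
PRICE_OUTPUT = 15.00

def session_cost(input_tokens, cache_read_tokens, output_tokens):
    """Return (actual_cost, estimated_cache_savings) for a session."""
    cost = (
        input_tokens * PRICE_INPUT
        + cache_read_tokens * PRICE_CACHE_READ
        + output_tokens * PRICE_OUTPUT
    ) / 1_000_000
    # Savings: what cached tokens would have cost at the full input rate.
    saved = cache_read_tokens * (PRICE_INPUT - PRICE_CACHE_READ) / 1_000_000
    return cost, saved
```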
Skill Creator mode
New command /skill-creator switches the assistant into a mode that helps you author Agent Skills: structured SKILL.md files with YAML frontmatter, written under ~/.config/banana-code/skills/<skill-name>/. The status bar shows SKILL CREATOR MODE; return with /agent.
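A skill authored this way might look like the following (the frontmatter fields and the skill name are illustrative, not a guaranteed schema):

```markdown
---
name: commit-helper
description: Drafts conventional-commit messages from staged changes.
---

# commit-helper

When the user asks for a commit message, inspect the staged diff
and summarize it as a conventional commit (type, scope, subject).
```

This file would live at ~/.config/banana-code/skills/commit-helper/SKILL.md.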
New slash commands and style
- /style: Normal, Explanatory, or Formal writing tone.
- /effort: Claude reasoning effort (provider-specific tiers).
- Documentation table and help output also list /skill-creator alongside existing plan/ask/security flows.
Built-in docs for the model
The get_banana_docs tool gives the model a reliable summary of Banana Code (plus README when present), so answers about slash commands and setup stay accurate.
UltraMemory (optional)
Enable UltraMemory under /settings to run background summarization of eligible chats into global memory. It can significantly increase API usage; the CLI asks for confirmation before turning it on, and only processes activity after you enable it.
Richer @-mentions
File mentions support quoted paths (spaces), ~ expansion, and attaching images via @@path for multimodal providers.
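Parsing these mention forms can be sketched with a single regex that handles quoted paths, the @@ image prefix, and ~ expansion. This is our own illustration, not the actual parser (a real one would also handle edge cases like email addresses):

```python
import os
import re

# @path, @"path with spaces", or @@path (image attachment).
MENTION_RE = re.compile(r'@(@?)(?:"([^"]+)"|(\S+))')

def parse_mentions(text):
    """Yield (expanded_path, is_image) for each @-mention in text."""
    for m in MENTION_RE.finditer(text):
        is_image = m.group(1) == "@"          # double @ marks an image
        raw = m.group(2) or m.group(3)        # quoted or bare path
        yield os.path.expanduser(raw), is_image
```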
Headless API security
banana --api now uses a generated API token stored at ~/.config/banana-code/token.json. HTTP requests need an Authorization: Bearer <token> header or a ?token=<token> query parameter; WebSockets should connect with ?token=<token> unless you explicitly pass --no-auth (discouraged).
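Checking either credential form is straightforward; a sketch using constant-time comparison (the header and query-parameter names follow this post, everything else is illustrative):

```python
import hmac

def authorized(token: str, headers: dict, query: dict) -> bool:
    """Accept either an Authorization: Bearer header or a ?token= parameter."""
    auth = headers.get("authorization", "")
    if auth.startswith("Bearer ") and hmac.compare_digest(auth[len("Bearer "):], token):
        return True
    # Fall back to the query-string token (used by WebSocket clients).
    return hmac.compare_digest(query.get("token", ""), token)
```

hmac.compare_digest avoids leaking the token's contents through timing differences, which a plain `==` comparison could in principle do.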
Other polish
- Claude: Opus 4.7 in the roster, prompt-cache-aware costing, extended streaming/thinking behavior where the API supports it.
- Sessions: More reliable save paths on exit, Ctrl+C, and errors; terminal sessions are cleaned up on shutdown.
- Startup: Refreshed ASCII banner and messaging.
Install or upgrade with npm install -g @banaxi/banana-code and read the full docs on the Docs page.