
fix(editor): Resolve nodes stuck on loading after execution in instance-ai preview#28450

Open
mutdmour wants to merge 47 commits into master from
feature/instance-ai-exec-preview

Conversation

@mutdmour
Contributor

@mutdmour mutdmour commented Apr 14, 2026

Summary

Fixes nodes staying stuck in loading/waiting state after workflow execution completes in the instance-ai workflow preview panel.

Two root causes identified and fixed:

  1. Execution polling for fire-and-forget tool — The execute_workflow MCP tool returns immediately before the execution completes. The preview iframe fetches execution data via API, but when nodes like Wait are involved, the execution may still be running at fetch time. Added polling in InstanceAiWorkflowPreview that detects incomplete executions and reloads the iframe when they finish.

  2. Relay desync on Wait node resume — When a Wait node resumes, the server sends a second executionStarted with the same execution ID. useExecutionPushEvents was creating a fresh eventLog, which desynced the relay cursor. Now appends to the existing log when the same execution ID is seen again.
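
The polling in (1) can be sketched as follows. This is a minimal illustration with injectable `fetchStatus`/`reload` callbacks — the names are assumptions and do not match the actual InstanceAiWorkflowPreview implementation:

```typescript
type ExecutionStatus = 'running' | 'waiting' | 'success' | 'error';

async function pollUntilFinished(
	fetchStatus: () => Promise<ExecutionStatus>,
	reload: () => void,
	intervalMs = 1000,
	maxAttempts = 30,
): Promise<boolean> {
	for (let attempt = 0; attempt < maxAttempts; attempt++) {
		const status = await fetchStatus();
		// 'running' and 'waiting' mean the fire-and-forget execution has
		// not settled yet; keep polling.
		if (status !== 'running' && status !== 'waiting') {
			reload(); // refresh the iframe so it refetches final run data
			return true;
		}
		await new Promise((resolve) => setTimeout(resolve, intervalMs));
	}
	return false; // gave up; leave the preview as-is
}
```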

How to test

  1. Open Instance AI, ask it to build and run a workflow with a Wait node
  2. Approve the execution when prompted
  3. After the agent reports success, verify all three nodes show green success checkmarks
  4. No node should remain in loading or waiting state

Review / Merge checklist

  • PR title and summary are descriptive.
  • Docs updated or follow-up ticket created.
  • Tests included.
  • PR labeled with release/backport (if needed).
  • I have seen this code, I have run this code, and I take responsibility for this code.

mutdmour and others added 30 commits April 10, 2026 14:09
Remove type icons from tab labels, make tabs fill full header height,
and replace loading spinners with larger loader-circle icon (80px).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…g (no-changelog)

Adds a Playwright e2e test that captures the bug where the last node in
the instance AI workflow preview stays in "running" state (spinning
border) after execution completes. The test sends a specific prompt to
build and execute a 3-node workflow, then asserts that no canvas nodes
remain with the .running CSS class.

Includes InstanceAiPage page object, navigation helper, and test
fixtures with N8N_ENABLED_MODULES=instance-ai.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ionFinished (no-changelog)

The event relay watcher only forwarded the last event in the log, so when
Vue coalesced multiple ref updates into one callback, intermediate events
(e.g. nodeExecuteAfter for the last node) were silently dropped. This left
the iframe's executing-node queue with a stale entry, keeping the last node
in spinning/running state after the workflow finished.

- Track relayed event count so every new event is forwarded, even when the
  watcher fires once for multiple log additions.
- Keep the eventLog intact when executionFinished arrives (instead of
  clearing it immediately) so the relay can forward pending events before
  sending the synthetic executionFinished.
- Add clearEventLog() to useExecutionPushEvents, called by the relay after
  all pending events have been forwarded.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
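The cursor-based fix above can be illustrated with a minimal sketch — `RelayState`/`relayPending` are hypothetical names, not the actual `useEventRelay.ts` API:

```typescript
interface RelayState<T> {
	eventLog: T[];
	relayedCount: number; // events already forwarded to the iframe
}

function relayPending<T>(state: RelayState<T>, forward: (event: T) => void): void {
	// Forward every event added since the last callback, not just the
	// last one — Vue may coalesce several ref updates into one watcher
	// invocation, which previously dropped intermediate events.
	while (state.relayedCount < state.eventLog.length) {
		forward(state.eventLog[state.relayedCount]);
		state.relayedCount += 1;
	}
}
```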
Add 16 Playwright e2e tests across 6 spec files covering instance AI
workflow preview, artifacts, timeline, sidebar, confirmations, and
chat basics. Wire up proxy-aware fetch in the AI SDK model creation
so MockServer can intercept Anthropic API calls for recording/replay.

- Expand InstanceAiPage page object with 30+ locators
- Add InstanceAiSidebar component page object
- Add data-test-id to preview close button
- Add getProxyFetch() to model-factory.ts and instance-ai.service.ts
  so @ai-sdk/anthropic respects HTTP_PROXY in e2e containers
- Rewrite fixtures with proxy service recording support
- Replace single execution-state test with comprehensive suite

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…hangelog)

Add two-tier trace replay system that records tool I/O during e2e test
recording and replays with bidirectional ID remapping in CI. This enables
deterministic replay of complex multi-step agent tests where tool execution
produces dynamic IDs.

- New trace-replay.ts: IdRemapper (ID-field-aware), TraceIndex (per-role
  cursors), TraceWriter, JSONL I/O helpers, PURE_REPLAY_TOOLS set
- Modified langsmith-tracing.ts: replayWrapTool (Tier 1: real execution +
  ID remap), pureReplayWrapTool (Tier 2: pure replay for external deps),
  recordWrapTool, createTraceReplayOnlyContext stub for non-LangSmith envs
- New test-only controller endpoints: POST/GET/DELETE /test/tool-trace
  with slug-scoped storage for parallel test isolation
- Updated fixture: records trace.jsonl during recording, loads for replay,
  slug-scoped activate/retrieve/clear lifecycle
- 23 unit tests for IdRemapper and TraceIndex
- Recorded trace.jsonl files for all 15 instance AI test expectations

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…log)

Add subString body matching on the system prompt to disambiguate LLM call
types (title generation vs orchestrator vs sub-agent) during proxy replay.
Without this, sequential expectations could be served to the wrong call
when the call order differs between recording and replay.

Re-record all expectations with the body matcher and remove debug logging
from trace replay wrappers.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When re-recording with a real API key, always use record mode (never
load old trace events into the backend). Previously, existing trace
files would cause the backend to enter replay mode during re-recording,
resulting in trace.jsonl files with only a header and no tool calls.

Re-record all trace.jsonl files with proper tool call events.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Re-record all proxy expectations after fixing the recording mode logic.
Expectations now have subString body matchers on the system prompt and
trace.jsonl files have proper tool call events.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The proxy's sequential mode sets the last expectation as unlimited (fallback
for extra agent turns). Previously this applied to the last file alphabetically
which could be a community_nodes GET. Now it finds the last /v1/messages
POST expectation specifically.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…elog)

Background task completion triggers `startInternalFollowUpRun`, which
creates a new trace context. Previously each context got a fresh
TraceIndex with cursor at 0, so the follow-up run's first tool call
(e.g. list-workflows) would mismatch the first trace event
(build-workflow-with-agent) and throw.

Fix: store a shared TraceIndex/IdRemapper per test slug on the service.
All runs within the same slug reuse the same instances, preserving
cursor state across the initial run and any follow-up runs.

This fixes the two confirmation e2e tests that rely on suspend/resume.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
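The slug-scoped sharing described above might look roughly like this — `TraceReplayState` stands in for the real TraceIndex/IdRemapper pair, and the names are assumptions:

```typescript
class TraceReplayState {
	cursor = 0; // position in the recorded trace, shared across runs
}

const stateBySlug = new Map<string, TraceReplayState>();

function getReplayState(slug: string): TraceReplayState {
	// All runs within a test slug (the initial run plus any background
	// follow-up runs spawned via startInternalFollowUpRun) reuse one
	// instance, so the cursor survives new trace contexts.
	let state = stateBySlug.get(slug);
	if (!state) {
		state = new TraceReplayState();
		stateBySlug.set(slug, state);
	}
	return state;
}
```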
…(no-changelog)

waitForAssistantResponse only waited for the first message element to appear
(streaming start), not for the agent to finish. Sidebar operations then raced
against the still-running agent. New waitForResponseComplete waits for the send
button to reappear, which only renders when isStreaming becomes false.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…s between tests (no-changelog)

Two preview tests failed because their recorded proxy expectations contained
stale LLM responses from previous tests' background task follow-ups. The
fixture now cancels leftover background tasks before each test via a new
test-only endpoint, preventing future cross-test contamination.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ET (no-changelog)

MockServer proxy connections intermittently reset when 4 parallel workers
load expectations simultaneously. Add withRetry helper with exponential
backoff (3 retries, 500ms base) and re-throw on failure instead of
silently swallowing the error.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
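A sketch of the retry helper described above (3 retries, 500ms base, exponential backoff, re-throw on final failure) — the signature is an assumption, not the exact helper in the test utilities:

```typescript
async function withRetry<T>(
	fn: () => Promise<T>,
	retries = 3,
	baseDelayMs = 500,
): Promise<T> {
	let lastError: unknown;
	for (let attempt = 0; attempt <= retries; attempt++) {
		try {
			return await fn();
		} catch (error) {
			lastError = error;
			if (attempt < retries) {
				// 500ms, 1s, 2s — exponential backoff between attempts
				await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
			}
		}
	}
	throw lastError; // re-throw instead of silently swallowing the error
}
```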
…ation (no-changelog)

Positional selectors (.last()) break when parallel tests create threads in
shared containers. Switch to getThreadByTitle() with LLM-generated titles
from recordings. Also handle missing expectations directories gracefully.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ure (no-changelog)

Covers the record/replay architecture, ID remapping problem and solution,
two-tier tool wrapping strategy, trace format, and troubleshooting guide.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ant proxy expectations (no-changelog)

- Add unit tests for TraceWriter, parseTraceJsonl, model-factory proxy fetch,
  clearEventLog, and useEventRelay coalesced event handling
- Extract test-only trace replay endpoints into InstanceAiTestController,
  conditionally registered when N8N_INSTANCE_AI_TRACE_REPLAY is set
- Extract trace replay state from InstanceAiService into TraceReplayState class
- Remove 83 irrelevant api-staging community nodes expectation files
- Fix stale test that expected eventLog cleared on executionFinished

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…tance AI e2e tests (no-changelog)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…laims (no-changelog)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rename restricted `err` identifier to `error` and mark withRetry
callbacks async to comply with @typescript-eslint/promise-function-async.
Use nullish coalescing for proxy env vars, move the undici type
annotation to a top-level import type, and mark the returned fetch
wrapper async so its Promise return type is explicit.
…hangelog)

Iterate all workflow executions so background workflows have their
buffered event logs cleaned up when they finish, preventing stale
replay if the user later switches tabs. Track the last-seen
executionId per workflow to detect when useExecutionPushEvents issues
a fresh eventLog on a re-execution and reset the relay cursor, which
otherwise would skip the new run's events.
…elog)

Replace import() type annotation with top-level import, quote 'string'
property keys to avoid id-denylist, and move the e2e tests doc under the
instance-ai package docs folder.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…gelog)

clearDir was skipped when recordedExpectations was empty, leaving stale
files that subsequent replays would consume as outdated data.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…-tabs

# Conflicts:
#	packages/cli/src/modules/instance-ai/instance-ai.service.ts
mutdmour and others added 7 commits April 13, 2026 15:05
Replace the implicit `jsonParse<TraceEvent>` cast in `parseTraceJsonl`
with a real type guard. Each line must be an object with a known `kind`
discriminator (`header`, `tool-call`, `tool-suspend`, `tool-resume`);
otherwise the parse throws with the offending line number so a malformed
expectation file fails loudly instead of corrupting downstream replay.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
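The type guard described above might be sketched as follows — field names beyond the `kind` discriminator are assumptions:

```typescript
const TRACE_KINDS = new Set(['header', 'tool-call', 'tool-suspend', 'tool-resume']);

interface TraceEvent {
	kind: string;
	[key: string]: unknown;
}

function parseTraceJsonl(text: string): TraceEvent[] {
	const events: TraceEvent[] = [];
	const lines = text.split('\n');
	for (let i = 0; i < lines.length; i++) {
		if (lines[i].trim() === '') continue;
		const parsed: unknown = JSON.parse(lines[i]);
		const kind = (parsed as { kind?: string } | null)?.kind;
		if (typeof parsed !== 'object' || parsed === null || !TRACE_KINDS.has(kind ?? '')) {
			// Fail loudly with the offending line number instead of
			// letting a malformed event corrupt downstream replay.
			throw new Error(`Malformed trace event at line ${i + 1}`);
		}
		events.push(parsed as TraceEvent);
	}
	return events;
}
```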
…iew test

Adds two layered Playwright runners and a failing test that exercises them:

- `pnpm test:local:isolated` — generic local runner with a random port,
  throwaway `N8N_USER_FOLDER` under the OS temp dir, full `@capability:*`
  inclusion, and process-group cleanup. Extracted so other modules can reuse
  it via `N8N_TEST_ENV`.
- `pnpm test:local:instance-ai` — thin wrapper that pre-fills the four
  instance-ai env vars (`N8N_ENABLED_MODULES`, model, key,
  local-gateway-disabled) over the generic runner.
- New `should mark all nodes as success after execution completes` test in
  the workflow preview spec — currently failing because terminal nodes after
  a Wait stay in the running/waiting state instead of flipping to success.
- `instanceAiProxySetup` fixture now no-ops when there's no `n8nContainer`,
  so the local runner can hit the real Anthropic API without a proxy stack.
- README.md additions cover both runners, env-var levers
  (`PLAYWRIGHT_ALLOW_CONTAINER_ONLY`, `PLAYWRIGHT_SKIP_WEBSERVER`), and the
  instance-ai test workflow end-to-end.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds a third subsection under "Running Tests" in e2e-tests.md covering
the new local-build mode (no docker, real Anthropic key) via
`pnpm test:local:instance-ai`, alongside a "when to use which mode" guide.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ce-ai preview

Three root causes fixed:

1. Execution polling for fire-and-forget tool — the execute_workflow tool
   returns immediately before completion. The preview fetches execution data
   via API but it may still be running (Wait node pending). Added polling in
   InstanceAiWorkflowPreview to reload the iframe when execution finishes.

2. Relay desync on Wait resume — when a Wait node resumes, the server sends
   a second executionStarted with the same execution ID. The event handler
   was resetting the eventLog, desyncing the relay cursor. Now appends to
   the existing log when the same execution ID is seen.

3. Wait node permanently showing 'waiting' — the Wait node's executionStatus
   in run data stays 'waiting' even after the overall execution succeeds.
   Now promoted to 'success' when the execution completed successfully.

Also fixes pre-existing stylelint error (invalid CSS variable name).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@n8n-assistant n8n-assistant bot added core Enhancement outside /nodes-base and /editor-ui n8n team Authored by the n8n team labels Apr 14, 2026
…-exec-preview

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@codecov

codecov bot commented Apr 14, 2026

Bundle Report

Changes will increase total bundle size by 44.8kB (0.1%) ⬆️. This is within the configured threshold ✅

Detailed changes
Bundle name Size Change
editor-ui-esm 45.76MB 44.8kB (0.1%) ⬆️

Affected Assets, Files, and Routes:

view changes for bundle: editor-ui-esm

Assets Changed:

Asset Name Size Change Total Size Change (%)
assets/worker-*.js 4.65kB 3.17MB 0.15%
assets/constants-*.js 286 bytes 3.14MB 0.01%
assets/src-*.js 1.1kB 2.43MB 0.05%
assets/index-*.js 440 bytes 1.31MB 0.03%
assets/users.store-*.js 3.21kB 1.05MB 0.31%
assets/core-*.js 4.23kB 627.53kB 0.68%
assets/InstanceAiView-*.js 3.15kB 346.99kB 0.92%
assets/WorkflowsView-*.js 1.64kB 201.71kB 0.82%
assets/usePostMessageHandler-*.js 26 bytes 137.05kB 0.02%
assets/useRootStore-*.js 4.66kB 131.25kB 3.68%
assets/WorkflowLayout-*.js 458 bytes 127.84kB 0.36%
assets/router-*.js 313 bytes 119.31kB 0.26%
assets/SettingsSso-*.js 401 bytes 106.37kB 0.38%
assets/settings.store-*.js 25 bytes 80.06kB 0.03%
assets/CanvasRunWorkflowButton-*.js 635 bytes 78.28kB 0.82%
assets/CreditWarningBanner-*.js -38 bytes 55.17kB -0.07%
assets/AppSidebar-*.js 11.11kB 43.25kB 34.59% ⚠️
assets/SettingsSso-*.css -59 bytes 34.7kB -0.17%
assets/AgentEditorModal-*.js 26 bytes 34.58kB 0.08%
assets/usePushConnection-*.js 231 bytes 31.37kB 0.74%
assets/SettingsSecretsProviders.ee-*.js 13 bytes 28.73kB 0.05%
assets/ProjectHeader-*.js 1.65kB 27.43kB 6.4% ⚠️
assets/readyToRun.store-*.js 13 bytes 22.82kB 0.06%
assets/SettingsUsageAndPlan-*.js 13 bytes 19.1kB 0.07%
assets/DataTableActions-*.js 422 bytes 18.28kB 2.36%
assets/AppSidebar-*.css 3.67kB 17.63kB 26.31% ⚠️
assets/SettingsCommunityNodesView-*.js 13 bytes 16.47kB 0.08%
assets/DataTableDetailsView-*.js 67 bytes 14.57kB 0.46%
assets/dataTable.store-*.js 96 bytes 9.49kB 1.02%
assets/folders.store-*.js 89 bytes 8.99kB 1.0%
assets/TagsDropdown-*.js 13 bytes 8.27kB 0.16%
assets/WorkflowPreview-*.js 248 bytes 7.96kB 3.21%
assets/ExternalSecretsProviderConnectionSwitch.ee-*.js 13 bytes 7.61kB 0.17%
assets/ProjectHeader-*.css 394 bytes 6.86kB 6.1% ⚠️
assets/useWorkflowActivate-*.js 406 bytes 6.26kB 6.94% ⚠️
assets/sso.store-*.js 162 bytes 5.96kB 2.8%
assets/ContactAdministratorToInstall-*.js 159 bytes 5.92kB 2.76%
assets/AuthView-*.js 13 bytes 4.76kB 0.27%
assets/useFreeAiCredits-*.js 13 bytes 2.19kB 0.6%
assets/SamlOnboarding-*.js 13 bytes 2.18kB 0.6%
assets/useActivationError-*.js (New) 826 bytes 826 bytes 100.0% 🚀

Files in assets/InstanceAiView-*.js:

  • ./src/features/ai/instanceAi/useEventRelay.ts → Total Size: 1.78kB

  • ./src/features/ai/instanceAi/useExecutionPushEvents.ts → Total Size: 2.83kB

  • ./src/features/ai/instanceAi/components/InstanceAiWorkflowPreview.vue → Total Size: 394 bytes

Files in assets/WorkflowPreview-*.js:

  • ./src/app/components/WorkflowPreview.vue → Total Size: 331 bytes

The test was lost during merge conflict resolution (master's version was
taken for the spec file). Adds back the test that verifies all nodes show
success after a workflow with a Wait node completes execution.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@github-actions
Contributor

github-actions bot commented Apr 14, 2026

Performance Comparison

Comparing current vs. latest master vs. 14-day baseline

Idle baseline with Instance AI module loaded

Metric Current Latest Master Baseline (avg) vs Master vs Baseline Status
instance-ai-rss-baseline 383.93 MB 388.20 MB 372.63 MB (σ 22.95) -1.1% +3.0%
instance-ai-heap-used-baseline 186.01 MB 186.52 MB 186.34 MB (σ 0.24) -0.3% -0.2% ⚠️

docker-stats

Metric Current Latest Master Baseline (avg) vs Master vs Baseline Status
docker-image-size-runners 393.00 MB 393.00 MB 391.63 MB (σ 11.06) +0.0% +0.3%
docker-image-size-n8n 1269.76 MB 1269.76 MB 1269.76 MB (σ 0.00) +0.0% +0.0%

Memory consumption baseline with starter plan resources

Metric Current Latest Master Baseline (avg) vs Master vs Baseline Status
memory-heap-used-baseline 114.17 MB 114.05 MB 113.86 MB (σ 0.84) +0.1% +0.3%
memory-rss-baseline 359.25 MB 287.98 MB 284.98 MB (σ 42.51) +24.7% +26.1% ⚠️
How to read this table
  • Current: This PR's value (or latest master if PR perf tests haven't run)
  • Latest Master: Most recent nightly master measurement
  • Baseline: Rolling 14-day average from master
  • vs Master: PR impact (current vs latest master)
  • vs Baseline: Drift from baseline (current vs rolling avg)
  • Status: ✅ within 1σ | ⚠️ 1-2σ | 🔴 >2σ regression

@codecov

codecov bot commented Apr 14, 2026

Codecov Report

❌ Patch coverage is 43.90244% with 23 lines in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
...nstanceAi/components/InstanceAiWorkflowPreview.vue 32.35% 23 Missing ⚠️

📢 Thoughts on this report? Let us know!

mutdmour and others added 5 commits April 14, 2026 11:56
Regenerated recordings for all 4 workflow preview tests including the new
"should mark all nodes as success after execution completes" test.

Also excludes expectations/ from biome checks (recorded JSON fixtures
can exceed the 1MB file size limit).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…expectations

These external API calls to api-staging.n8n.io are not relevant to the
instance-ai test recordings and add unnecessary bulk.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…es proxy noise

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Wait node already has executionStatus='success' in the database after
the execution completes. Only the Set node (after the Wait) was missing
data because the execution hadn't finished when the iframe first fetched.
The polling fix handles that case — this promotion was not needed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When running locally (no Docker container), n8nContainer is null. The
fixture now short-circuits proxy setup in that case, matching the
behavior before the master merge overwrote it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@mutdmour mutdmour marked this pull request as ready for review April 14, 2026 14:10
mutdmour and others added 3 commits April 14, 2026 16:16
…re expectations

Poll indefinitely while the agent is streaming instead of a fixed 40-attempt
cap. Once streaming stops, allow a short grace window (~7.5s) before giving
up. Also revert expectation recordings to match master.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…havior

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>