fix(Basic LLM Chain Node): Support ResponsesApi and OpenAI tools #22936
Conversation
E2E Tests: n8n tests passed after 9m 51.3s
3 issues found across 5 files
Prompt for AI agents (all 3 issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="packages/@n8n/nodes-langchain/nodes/chains/ChainLLM/methods/chainExecutor.ts">
<violation number="1" location="packages/@n8n/nodes-langchain/nodes/chains/ChainLLM/methods/chainExecutor.ts:122">
P2: Rule violated: **Prefer Typeguards over Type casting**
Use a type guard instead of `as Tool[]` for type narrowing. Consider creating a type guard function like `isToolArray(value): value is Tool[]` to safely verify the type before using it.</violation>
<violation number="2" location="packages/@n8n/nodes-langchain/nodes/chains/ChainLLM/methods/chainExecutor.ts:153">
P2: Rule violated: **Prefer Typeguards over Type casting**
Add an explicit return type annotation to `prepareLlm` function instead of casting its return value. The function at lines 128-135 should declare its return type as `BaseChatModel | BaseLanguageModel | Runnable<...>`.</violation>
</file>
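The guard suggested for `chainExecutor.ts` could look like the sketch below. `ToolLike` is a hypothetical, minimal stand-in for LangChain's `Tool` type (the real class carries many more members); only the shape check matters for the narrowing.

```typescript
// Hypothetical minimal stand-in for LangChain's Tool type; the real
// class from @langchain/core/tools has many more members.
interface ToolLike {
  name: string;
  description: string;
}

// Type guard: verifies the unknown metadata value is an array of
// tool-shaped objects before it is treated as Tool[].
function isToolArray(value: unknown): value is ToolLike[] {
  return (
    Array.isArray(value) &&
    value.every(
      (item) =>
        typeof item === 'object' &&
        item !== null &&
        typeof (item as Record<string, unknown>).name === 'string' &&
        typeof (item as Record<string, unknown>).description === 'string',
    )
  );
}

// Replaces `(llm.metadata?.tools as Tool[]) ?? []`:
const metadata: { tools?: unknown } = {
  tools: [{ name: 'web_search', description: 'Search the web' }],
};
const modelTools: ToolLike[] = isToolArray(metadata.tools) ? metadata.tools : [];
```

Unlike the cast, the guard degrades gracefully: malformed metadata yields an empty array instead of a runtime surprise downstream.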
<file name="packages/@n8n/nodes-langchain/nodes/chains/ChainLLM/methods/promptUtils.ts">
<violation number="1" location="packages/@n8n/nodes-langchain/nodes/chains/ChainLLM/methods/promptUtils.ts:181">
P1: Rule violated: **Prefer Typeguards over Type casting**
Use a type guard instead of `as AgentFinish` for type narrowing. The file already defines an `isMessage()` type guard above - create a similar `isAgentFinish()` guard:

```typescript
const isAgentFinish = (value: unknown): value is AgentFinish => {
	return typeof value === 'object' && value !== null && 'returnValues' in value;
};
```

Then use it: `if (isAgentFinish(steps)) { ... }`</violation>
</file>
```typescript
	fallbackLlm,
}: ChainExecutionParams): Promise<unknown[]> {
	const version = context.getNode().typeVersion;
	const model = prepareLlm(llm, fallbackLlm) as BaseChatModel | BaseLanguageModel;
```
P2: Rule violated: Prefer Typeguards over Type casting
Add an explicit return type annotation to prepareLlm function instead of casting its return value. The function at lines 128-135 should declare its return type as BaseChatModel | BaseLanguageModel | Runnable<...>.
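A sketch of the suggested fix: spelling the union out as an explicit return type on `prepareLlm` lets the call site drop the `as BaseChatModel | BaseLanguageModel` cast. The model types here are simplified stand-ins for the real LangChain classes, and the body is illustrative only (the real function also wires in the fallback model).

```typescript
// Simplified stand-ins for the LangChain model types used in
// chainExecutor.ts; the real ones come from @langchain/core.
type BaseLanguageModel = { invoke: (input: string) => Promise<string> };
type BaseChatModel = BaseLanguageModel & {
  bindTools?: (tools: unknown[]) => BaseLanguageModel;
};

// With the return type declared on the function itself, callers can
// write `const model = prepareLlm(llm, fallbackLlm);` with no cast.
function prepareLlm(
  llm: BaseChatModel | BaseLanguageModel,
  fallbackLlm?: BaseChatModel | BaseLanguageModel,
): BaseChatModel | BaseLanguageModel {
  // Illustrative body: the real implementation also uses fallbackLlm
  // to build a model with fallback behaviour.
  void fallbackLlm;
  return llm;
}
```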
```typescript
// Some models nodes, like OpenAI, can define built-in tools in their metadata
function withBuiltInTools(llm: BaseChatModel | BaseLanguageModel) {
	const modelTools = (llm.metadata?.tools as Tool[]) ?? [];
```
P2: Rule violated: Prefer Typeguards over Type casting
Use a type guard instead of as Tool[] for type narrowing. Consider creating a type guard function like isToolArray(value): value is Tool[] to safely verify the type before using it.
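Putting the review's suggestion together, `withBuiltInTools` could drop the `as Tool[]` cast by running the metadata through a guard first. The types below are simplified stand-ins for the LangChain originals, and the `isChatInstance(llm)` check from the PR is replaced by a plain `bindTools` presence check so the sketch stays self-contained.

```typescript
// Simplified stand-ins for the real LangChain types.
interface ToolLike {
  name: string;
}

interface ModelLike {
  metadata?: { tools?: unknown };
  bindTools?: (tools: ToolLike[]) => ModelLike;
}

// Guard replacing the `as Tool[]` cast.
const isToolArray = (value: unknown): value is ToolLike[] =>
  Array.isArray(value) &&
  value.every((t) => typeof t === 'object' && t !== null && 'name' in t);

// As in the PR, but without the cast: bind metadata tools onto the
// model when present, otherwise return the model untouched. The real
// code additionally checks isChatInstance(llm).
function withBuiltInTools(llm: ModelLike): ModelLike {
  const maybeTools = llm.metadata?.tools;
  const modelTools = isToolArray(maybeTools) ? maybeTools : [];
  if (modelTools.length && llm.bindTools) {
    return llm.bindTools(modelTools);
  }
  return llm;
}
```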
Summary
This PR updates ChainLLM to support the Responses API and built-in OpenAI tools. Because the change can be breaking, I added a new node version.
Also, the fallback agent was not used when an output parser was enabled. This PR enables the fallback agent for all versions.
2025-12-09.10-30-42.mp4
Related Linear tickets, Github issues, and Community forum posts
https://linear.app/n8n/issue/NODE-4020/community-issue-structured-output-parser-always-produces-an-error-on
closes #22182
Review / Merge checklist
release/backport (if the PR is an urgent fix that needs to be backported)