
Conversation

@ifitsmanu

No description provided.

@gemini-code-assist

Summary of Changes

Hello @ifitsmanu, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the trading agents system by integrating new external data sources and bolstering the robustness of its core components. It introduces support for X (Twitter) sentiment analysis, as well as news feeds from Bloomberg and Reuters, providing agents with a richer and more diverse information landscape. Concurrently, the PR refines the agent memory system for more accurate historical context retrieval and implements extensive input validation across the graph to prevent errors. Furthermore, the investment signal processing logic has been made more reliable through a shift to regex-based extraction, and the project's dependencies have been meticulously organized and version-pinned for improved stability.

Highlights

  • Expanded Data Integrations: Introduced new tools for fetching sentiment from X (Twitter), and news from Bloomberg and Reuters, significantly broadening the data sources available to the agents.
  • Enhanced Agent Memory: Improved the memory retrieval system to include similarity scores and a minimum similarity threshold, ensuring more relevant past recommendations are considered.
  • Robust Input Validation: Implemented comprehensive input validation across various agent nodes and graph components, including checks for state integrity, date formats, and company names, leading to more stable execution.
  • Refactored Dependency Management: Updated requirements.txt with precise version pinning and logical categorization of dependencies, improving project maintainability and reproducibility.
  • Reliable Signal Processing: Replaced LLM-based investment decision extraction with a regex-driven approach, making the signal processing more deterministic and less prone to LLM hallucinations.
  • Codebase Refinements: Cleaned up agent files by removing unused imports and redundant variables, and improved prompt readability for better agent performance and maintainability.
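
To make the "Reliable Signal Processing" point concrete, here is a minimal sketch of what regex-driven decision extraction might look like. The function name, marker phrases, and fallback behavior are illustrative assumptions, not the PR's actual implementation:

```python
import re

def extract_signal(report: str) -> str:
    """Deterministically extract a BUY/SELL/HOLD decision from a final report.

    Looks for an explicit 'FINAL TRANSACTION PROPOSAL: <signal>' or
    'FINAL DECISION: <signal>' marker first, then falls back to the last
    standalone signal keyword anywhere in the text.
    """
    match = re.search(
        r"FINAL\s+(?:TRANSACTION\s+PROPOSAL|DECISION)\s*:?\s*\**\s*(BUY|SELL|HOLD)",
        report,
        re.IGNORECASE,
    )
    if match:
        return match.group(1).upper()
    # Fallback: last standalone occurrence of a signal keyword
    candidates = re.findall(r"\b(BUY|SELL|HOLD)\b", report, re.IGNORECASE)
    return candidates[-1].upper() if candidates else "HOLD"
```

Unlike asking an LLM to re-read its own report, this approach always returns one of three known values, which is what makes the pipeline deterministic.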
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.


Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist bot left a comment


Code Review

This pull request introduces significant enhancements by adding support for new data sources like X, Reuters, and Bloomberg, and refactoring various parts of the codebase for improved robustness and readability. The changes include better input validation, safer data access patterns, and more reliable logic for signal processing and agent interactions. However, there are a few critical issues that need attention, primarily related to missing dependencies in requirements.txt and a hardcoded file path in the configuration, which will prevent the application from running correctly in different environments. Additionally, there are some areas where code quality can be further improved by addressing error handling, code duplication, and removing unused code.

questionary
langchain_anthropic
langchain-google-genai
# Essential dependencies with compatible versions


critical

This file is missing critical dependencies required for the application to function correctly. The praw library, which is necessary for Reddit integration, has been removed. Additionally, reuterspy is used as a fallback in tradingagents/dataflows/reuters_utils.py but is not listed as a dependency. These omissions will lead to ImportError exceptions at runtime.

Please add the missing dependencies to this file. For example:

praw>=7.0.0
reuterspy>=0.1.6

os.path.join(os.path.dirname(__file__), ".")
),
"results_dir": os.getenv("TRADINGAGENTS_RESULTS_DIR", "./results"),
"data_dir": "/Users/yluo/Documents/Code/ScAI/FR1-data",


critical

The data_dir is hardcoded to an absolute path on a specific user's machine (/Users/yluo/...). This will prevent the code from running on any other machine. This should be a relative path or configured via an environment variable. Please also add the new environment variable to .env.example.

Suggested change
"data_dir": "/Users/yluo/Documents/Code/ScAI/FR1-data",
"data_dir": os.getenv("TRADINGAGENTS_DATA_DIR", "./data"),
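
A corresponding .env.example entry might look like the following; the variable name matches the suggested os.getenv call, and the path is just a placeholder default:

```
# Directory for local market data (defaults to ./data if unset)
TRADINGAGENTS_DATA_DIR=./data
```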

Comment on lines +46 to +47
except Exception:
    continue


high

Catching a generic Exception and then using continue silently swallows any potential errors during the API request or JSON parsing. This makes debugging very difficult. It's better to catch more specific exceptions (e.g., requests.exceptions.RequestException, ValueError for JSON errors) and log the error for visibility.

Suggested change
except Exception:
    continue
except (requests.exceptions.RequestException, ValueError) as e:
    print(f"Warning: Could not process Bloomberg news for term '{term}': {e}")
    continue

Comment on lines +53 to +54
except Exception:
    continue


high

Catching a generic Exception and then using continue silently swallows any potential errors. This can hide bugs and make debugging difficult. Please catch more specific exceptions (e.g., requests.exceptions.RequestException, ValueError) and log the error.

Suggested change
except Exception:
    continue
except (requests.exceptions.RequestException, ValueError) as e:
    print(f"Warning: Could not process Reuters news for query '{query}': {e}")
    continue

from langchain_core.messages import AIMessage
import time
import json
import functools


medium

The functools module is imported but not used in this file. Please remove it to keep the code clean.

from langchain_core.messages import AIMessage
import time
import json
import functools


medium

The functools module is imported but not used in this file. Please remove it to improve code clarity.

Comment on lines +57 to +72
def get_company_name(ticker: str) -> str:
    """Map ticker to company name for better search results"""
    ticker_mapping = {
        "AAPL": "Apple Inc",
        "MSFT": "Microsoft Corporation",
        "GOOGL": "Alphabet Google",
        "AMZN": "Amazon.com Inc",
        "TSLA": "Tesla Inc",
        "NVDA": "NVIDIA Corporation",
        "META": "Meta Facebook",
        "JPM": "JPMorgan Chase",
        "JNJ": "Johnson & Johnson",
        "V": "Visa Inc",
        "TSM": "Taiwan Semiconductor"
    }
    return ticker_mapping.get(ticker, ticker)


medium

The function get_company_name and its hardcoded ticker_mapping are duplicated in tradingagents/dataflows/reuters_utils.py. This duplicated code is hard to maintain. Consider moving this function to a shared utility file (e.g., tradingagents/dataflows/utils.py) to avoid duplication.

Also, this hardcoded map is not scalable. For a more robust solution, consider loading this mapping from a configuration file (e.g., a JSON or YAML file).
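
One possible shape for that shared utility, loading the map from a JSON file. The module path, file name, and caching choice are assumptions for illustration, not a prescription:

```python
import json
from functools import lru_cache
from pathlib import Path

# Hypothetical location for the shared mapping file; adjust to the repo layout.
MAPPING_FILE = Path("tradingagents/dataflows/ticker_mapping.json")

@lru_cache(maxsize=1)
def load_ticker_mapping() -> dict:
    """Load the ticker-to-company-name map once and cache it."""
    try:
        with open(MAPPING_FILE, encoding="utf-8") as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError):
        # Missing or malformed file: fall back to the identity mapping below.
        return {}

def get_company_name(ticker: str) -> str:
    """Map a ticker to a company name, falling back to the ticker itself."""
    return load_ticker_mapping().get(ticker, ticker)
```

Both bloomberg_utils.py and reuters_utils.py could then import this one function, and updating the mapping becomes a data change rather than a code change in two places.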

import re
import numpy as np
from typing import Annotated
from datetime import datetime, timedelta


medium

The timedelta class from the datetime module is imported but not used. Please remove it.

Suggested change
from datetime import datetime, timedelta
from datetime import datetime

Comment on lines +47 to +49
total_sentiment = 0
weighted_sentiment = 0
total_weight = 0


medium

The variables total_sentiment, weighted_sentiment, and total_weight are initialized here and updated in the loop on lines 70-72, but they are never used. Please remove these initializations and their updates in the loop to improve code clarity.
