
Conversation

@donggyun112


Add metadata parameter to Runner.run_async()

Add support for passing per-request metadata through the agent execution pipeline. This enables use cases like:

  • Passing user_id, trace_id, or session context to callbacks
  • Passing context keys for memory injection in before_model_callback

Changes:

  • Add metadata field to LlmRequest model
  • Add metadata field to InvocationContext model
  • Add metadata parameter to Runner.run_async() and related methods
  • Propagate metadata from InvocationContext to LlmRequest in base_llm_flow
  • Add unit tests for metadata functionality

Closes #2978


Link to Issue or Description of Change

  1. Link to an existing issue: #2978
  2. Description:

Problem:
Currently, there is no official way to pass per-request metadata (such as user_id, trace_id, or memory context keys) from Runner.run_async() to callbacks like before_model_callback.

Solution:
Add a metadata parameter to Runner.run_async() that flows through InvocationContext to LlmRequest. This provides a clean, official API for passing request-specific context to callbacks without workarounds.

Testing Plan

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.

======================= 35 passed, 22 warnings in 2.79s ========================

Test cases added:

  • test_new_invocation_context_with_metadata
  • test_new_invocation_context_without_metadata
  • test_run_async_passes_metadata_to_invocation_context
  • test_metadata_field_in_invocation_context
  • test_metadata_field_in_llm_request
  • test_llm_request_without_metadata

Manual End-to-End Tests:

  from google.adk.runners import Runner

  # Pass metadata when calling run_async
  async for event in runner.run_async(
      user_id="user123",
      session_id="session456",
      new_message=content,
      metadata={"experiment_id": "exp-001", "variant": "B"}
  ):
      # metadata is accessible in before_model_callback via llm_request.metadata
      pass
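
A minimal sketch of the consuming side, assuming an ADK LlmAgent wired with a before_model_callback (the callback signature follows the existing ADK convention; llm_request.metadata is the field added by this PR, and the model name and metadata keys are illustrative only):

  from google.adk.agents import LlmAgent

  def log_experiment_metadata(callback_context, llm_request):
      # Read the per-request metadata propagated from run_async().
      metadata = llm_request.metadata or {}
      print(f"experiment={metadata.get('experiment_id')}, variant={metadata.get('variant')}")
      # Returning None lets the model call proceed unchanged.
      return None

  root_agent = LlmAgent(
      name="example_agent",
      model="gemini-2.0-flash",
      instruction="You are a helpful assistant.",
      before_model_callback=log_experiment_metadata,
  )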

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.

Additional context

This PR addresses the need discussed in issue #2978 for a way to pass per-request context to callbacks.

@gemini-code-assist
Contributor

Summary of Changes

Hello @donggyun112, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a long-standing need to pass request-specific contextual data throughout the agent's execution. By introducing a metadata parameter to the primary run_async method and ensuring its propagation through key internal models, the change provides a clean and official mechanism for callbacks and other components to access relevant information like user IDs or trace IDs, thereby improving the flexibility and debuggability of agent interactions.

Highlights

  • Metadata Parameter for Runner.run_async(): Introduced a new metadata parameter to the Runner.run_async() method, allowing arbitrary, per-request contextual information to be passed into the agent execution pipeline.
  • Metadata Propagation: Ensured that the metadata is correctly propagated from Runner.run_async() through the InvocationContext and ultimately to the LlmRequest model, making it accessible during the invocation lifecycle, especially within callbacks like before_model_callback.
  • Enhanced Callback Context: This change enables use cases such as passing user_id, trace_id, or memory context keys directly to callbacks, providing a standardized and official API for request-specific context.
  • Comprehensive Unit Tests: Added a new suite of unit tests to verify the correct handling, propagation, and accessibility of the metadata field within InvocationContext, LlmRequest, and the Runner's execution flow.


@adk-bot added the core label ([Component] This issue is related to the core interface and implementation) on Dec 20, 2025
Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a metadata parameter to Runner.run_async() to allow passing per-request context through the execution pipeline. The changes are well-implemented, consistently propagating the new parameter through InvocationContext and LlmRequest. The addition of unit tests is great. I've suggested one improvement to a test case to make it more robust by explicitly verifying the metadata propagation instead of just checking for successful execution.

Comment on lines 1088 to 1109
  async def test_run_async_passes_metadata_to_invocation_context(self):
    """Test that run_async correctly passes metadata to invocation context."""
    session = await self.session_service.create_session(
        app_name=TEST_APP_ID, user_id=TEST_USER_ID, session_id=TEST_SESSION_ID
    )

    test_metadata = {"memctx_key": "memory123", "request_id": "req456"}
    events = []

    async for event in self.runner.run_async(
        user_id=TEST_USER_ID,
        session_id=TEST_SESSION_ID,
        new_message=types.Content(
            role="user", parts=[types.Part(text="Hello")]
        ),
        metadata=test_metadata,
    ):
      events.append(event)

    # The test passes if run_async completes without error
    # Metadata propagation is verified by the implementation
    assert len(events) > 0
Contributor

medium

This test currently acts as a smoke test, verifying that run_async completes without error but not actually checking if the metadata is propagated correctly. To make this test more robust, I suggest using a before_model_callback to intercept the LlmRequest and assert that the metadata field contains the expected values. This provides a much stronger guarantee that the feature works as intended.

  async def test_run_async_passes_metadata_to_invocation_context(self):
    """Test that run_async correctly passes metadata to invocation context."""
    from google.adk.models.llm_response import LlmResponse

    session = await self.session_service.create_session(
        app_name=TEST_APP_ID, user_id=TEST_USER_ID, session_id=TEST_SESSION_ID
    )

    test_metadata = {"memctx_key": "memory123", "request_id": "req456"}
    received_metadata = {}

    def before_model_callback(callback_context, llm_request):
      nonlocal received_metadata
      if llm_request.metadata:
        received_metadata = llm_request.metadata
      # Return a response to short-circuit the actual model call to speed up the test
      # and avoid reliance on a real model backend.
      return LlmResponse(
          content=types.Content(parts=[types.Part(text="mock response")])
      )

    self.root_agent.before_model_callback = before_model_callback

    events = []
    async for event in self.runner.run_async(
        user_id=TEST_USER_ID,
        session_id=TEST_SESSION_ID,
        new_message=types.Content(
            role="user", parts=[types.Part(text="Hello")]
        ),
        metadata=test_metadata,
    ):
      events.append(event)

    assert len(events) > 0
    assert received_metadata == test_metadata

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request effectively adds a metadata parameter to Runner.run_async() and propagates it through the execution pipeline, which is a great feature for passing request-specific context. The changes are well-implemented across the different files, and the inclusion of unit tests is appreciated. I have a couple of suggestions to improve the new tests for better robustness and to align with common Python styling practices.

Comment on lines 1088 to 1109
  async def test_run_async_passes_metadata_to_invocation_context(self):
    """Test that run_async correctly passes metadata to invocation context."""
    session = await self.session_service.create_session(
        app_name=TEST_APP_ID, user_id=TEST_USER_ID, session_id=TEST_SESSION_ID
    )

    test_metadata = {"memctx_key": "memory123", "request_id": "req456"}
    events = []

    async for event in self.runner.run_async(
        user_id=TEST_USER_ID,
        session_id=TEST_SESSION_ID,
        new_message=types.Content(
            role="user", parts=[types.Part(text="Hello")]
        ),
        metadata=test_metadata,
    ):
      events.append(event)

    # The test passes if run_async completes without error
    # Metadata propagation is verified by the implementation
    assert len(events) > 0
Contributor

medium

The current test for run_async metadata propagation is a bit weak. The comment "Metadata propagation is verified by the implementation" indicates that the test isn't actually verifying the behavior. The test should explicitly check that the metadata is passed down correctly. You can achieve this by mocking _setup_context_for_new_invocation and asserting it was called with the expected metadata.

You'll need to add from unittest.mock import patch to your imports.

  async def test_run_async_passes_metadata_to_invocation_context(self):
    """Test that run_async correctly passes metadata to invocation context."""
    await self.session_service.create_session(
        app_name=TEST_APP_ID, user_id=TEST_USER_ID, session_id=TEST_SESSION_ID
    )

    test_metadata = {"memctx_key": "memory123", "request_id": "req456"}

    with patch.object(self.runner, '_setup_context_for_new_invocation', wraps=self.runner._setup_context_for_new_invocation) as mock_setup_context:
        events = []
        async for event in self.runner.run_async(
            user_id=TEST_USER_ID,
            session_id=TEST_SESSION_ID,
            new_message=types.Content(
                role="user", parts=[types.Part(text="Hello")]
            ),
            metadata=test_metadata,
        ):
          events.append(event)

        # The test passes if run_async completes without error
        # and we can verify the metadata was passed.
        assert len(events) > 0
        mock_setup_context.assert_called_once()
        # Check the keyword arguments passed to the mocked method
        _, kwargs = mock_setup_context.call_args
        assert kwargs.get("metadata") == test_metadata


  def test_metadata_field_in_invocation_context(self):
    """Test that InvocationContext model accepts metadata field."""
    from google.adk.agents.invocation_context import InvocationContext
Contributor

medium

This local import is redundant as InvocationContext is already imported at the top of the file. Please remove it. Similarly, the local imports for LlmRequest on lines 1137 and 1149 should be moved to the top of the file to adhere to PEP 8 style guidelines.

@donggyun112 force-pushed the feat/add-metadata-parameter branch 2 times, most recently from 420d48d to f49186c on December 20, 2025 at 09:39
Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request effectively adds support for passing per-request metadata through the agent execution pipeline by introducing a metadata parameter to Runner.run_async(). The changes are well-implemented, propagating the metadata through InvocationContext and LlmRequest as intended. The new functionality is also well-documented with clear docstrings and comprehensively covered by unit tests. I have one suggestion to improve the robustness of the implementation by preventing potential side effects from mutable metadata objects.

live_request_queue=live_request_queue,
run_config=run_config,
resumability_config=self.resumability_config,
metadata=metadata,
Contributor

medium

To prevent accidental modification of the original metadata dictionary by the caller of run_async, it's a good practice to work with a copy of the metadata. Since dictionaries are mutable, any changes made to metadata within the runner's logic would also affect the caller's original dictionary. Creating a shallow copy here isolates the runner's execution context from the caller. This is especially important as run_async is an async generator, and the caller might modify the metadata dictionary while iterating over the yielded events.

Suggested change
metadata=metadata,
metadata=metadata.copy() if metadata is not None else None,

Add support for passing per-request metadata through the agent execution
pipeline. This enables use cases like:
- Passing user_id, trace_id, or session context to callbacks
- Enabling memory injection in before_model_callback
- Supporting request-specific context without using ContextVar workarounds

Changes:
- Add `metadata` field to LlmRequest model
- Add `metadata` field to InvocationContext model
- Add `metadata` parameter to Runner.run_async() and related methods
- Propagate metadata from InvocationContext to LlmRequest in base_llm_flow
- Add unit tests for metadata functionality

Closes google#2978
@donggyun112 force-pushed the feat/add-metadata-parameter branch from f49186c to 3fd6c93 on December 20, 2025 at 09:43
Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a metadata parameter to Runner.run_async() to allow passing per-request context through the execution pipeline, which is a great feature for enhancing traceability and context management. The implementation correctly propagates the metadata through InvocationContext and LlmRequest. My review includes a couple of suggestions to improve robustness: one is to clarify the shallow copy behavior of the metadata in the docstring to prevent potential misuse, and the other is to add a unit test to verify this copy behavior and prevent future regressions. Overall, the changes are well-implemented and the added tests cover the main functionality.

Comment on lines +421 to +423
metadata: Optional per-request metadata that will be passed to callbacks.
This allows passing request-specific context such as user_id, trace_id,
or memory context keys to before_model_callback and other callbacks.
Contributor

medium

To prevent potential subtle bugs, it's a good practice to clarify the copy behavior of the metadata dictionary in the docstring. Since a shallow copy is performed, modifications to nested mutable objects within a callback will affect the original object passed by the caller. Please add a note about this to help users of the API understand this behavior and avoid unexpected side effects. For example, you could add: Note: A shallow copy is made of this dictionary, so changes to nested mutable objects will affect the original object.
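
For reference, a plain-Python illustration of the shallow-copy behavior being described (this is independent of the ADK API; the key names are made up):

  original = {"experiment_id": "exp-001", "tags": {"cohort": "A"}}
  copied = original.copy()  # shallow copy, like the one made in run_async()

  copied["experiment_id"] = "exp-002"  # top-level write lands only in the copy
  copied["tags"]["cohort"] = "B"       # nested write lands in the shared object

  assert original["experiment_id"] == "exp-001"
  assert original["tags"]["cohort"] == "B"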

Comment on lines +1043 to +1044
class TestRunnerMetadata:
  """Tests for Runner metadata parameter functionality."""
Contributor

medium

The test suite for metadata is comprehensive for data propagation. It would be beneficial to also add a test case that explicitly verifies the behavior of the shallow copy of the metadata dictionary.
Specifically, a test could:

  1. Pass a metadata dictionary with a nested mutable object (e.g., a dictionary or a list).
  2. Modify both a top-level value and a value within the nested object inside a callback.
  3. Assert that the top-level change does not affect the original dictionary passed by the caller.
  4. Assert that the change to the nested object does affect the original object.

This would ensure that the isolation behavior is well-understood and prevent future regressions.
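
A hedged sketch of what such a test could look like, reusing the callback-interception pattern suggested earlier in this review (the fixture and constant names are assumed to match the surrounding TestRunnerMetadata suite; this is illustrative rather than the test that was actually committed):

  async def test_metadata_shallow_copy_isolation(self):
    """Top-level changes stay isolated; nested objects remain shared."""
    from google.adk.models.llm_response import LlmResponse

    await self.session_service.create_session(
        app_name=TEST_APP_ID, user_id=TEST_USER_ID, session_id=TEST_SESSION_ID
    )

    original = {"request_id": "req456", "nested": {"key": "value"}}

    def before_model_callback(callback_context, llm_request):
      # Mutate the propagated metadata inside the callback.
      llm_request.metadata["request_id"] = "changed"
      llm_request.metadata["nested"]["key"] = "changed"
      # Short-circuit the real model call, as in the earlier suggestion.
      return LlmResponse(
          content=types.Content(parts=[types.Part(text="mock response")])
      )

    self.root_agent.before_model_callback = before_model_callback

    async for _ in self.runner.run_async(
        user_id=TEST_USER_ID,
        session_id=TEST_SESSION_ID,
        new_message=types.Content(role="user", parts=[types.Part(text="Hi")]),
        metadata=original,
    ):
      pass

    assert original["request_id"] == "req456"      # isolated by the shallow copy
    assert original["nested"]["key"] == "changed"  # nested object is shared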

- Add note in docstring about shallow copy behavior for nested objects
- Add test_metadata_shallow_copy_isolation to verify:
  - Top-level changes are isolated from original dict
  - Nested object modifications affect original (shallow copy)
Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable metadata parameter to Runner.run_async(), allowing per-request context to be passed through the execution pipeline. The implementation is clean, well-documented, and accompanied by a comprehensive set of unit tests that cover the new functionality, including the nuances of the shallow copy behavior.

I've found one minor bug where an empty dictionary passed as metadata would be incorrectly treated as None, and I've left a specific comment with a suggested fix.

For completeness, you might also consider adding the metadata parameter to the run() and run_debug() methods in runners.py. Since they are wrappers around run_async(), updating them would provide a more consistent API.

"""
run_config = run_config or RunConfig()
# Create a shallow copy to isolate from caller's modifications
metadata = metadata.copy() if metadata else None
Contributor

high

The check `if metadata` is truthiness-based, which means an empty dictionary `{}` passed by the user will be incorrectly converted to `None`. This can lead to unexpected behavior if the user intends to pass an empty, mutable metadata object that might be populated by callbacks. You should use `if metadata is not None` to correctly handle an empty dictionary while still creating a shallow copy.

Suggested change
metadata = metadata.copy() if metadata else None
metadata = metadata.copy() if metadata is not None else None

- Change `if metadata` to `if metadata is not None` for truthiness check
- Empty dict {} was incorrectly converted to None due to falsy check
- Add test_empty_metadata_dict_not_converted_to_none to prevent regression
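
For reference, a hedged sketch of what that regression test might look like, using the patch-based approach suggested earlier in this review (it assumes `from unittest.mock import patch` is available at the top of the test file and that run_async forwards metadata to _setup_context_for_new_invocation as a keyword argument; this is illustrative, not the committed test):

  async def test_empty_metadata_dict_not_converted_to_none(self):
    """An empty metadata dict should survive as {} rather than become None."""
    await self.session_service.create_session(
        app_name=TEST_APP_ID, user_id=TEST_USER_ID, session_id=TEST_SESSION_ID
    )

    with patch.object(
        self.runner,
        "_setup_context_for_new_invocation",
        wraps=self.runner._setup_context_for_new_invocation,
    ) as mock_setup_context:
      async for _ in self.runner.run_async(
          user_id=TEST_USER_ID,
          session_id=TEST_SESSION_ID,
          new_message=types.Content(role="user", parts=[types.Part(text="Hi")]),
          metadata={},
      ):
        pass

      _, kwargs = mock_setup_context.call_args
      # With the `is not None` check, a copy of the empty dict is forwarded
      # instead of collapsing to None.
      assert kwargs.get("metadata") == {}
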
Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a metadata parameter to Runner.run_async() to allow passing per-request contextual data through the execution pipeline. The changes are well-implemented, adding the metadata field to InvocationContext and LlmRequest models and correctly propagating it through the relevant methods. The addition of comprehensive unit tests, which cover functionality, edge cases like empty metadata, and the nuances of the shallow copy mechanism, is particularly commendable. The code is clean, well-documented, and the feature is a valuable addition for enhancing callback capabilities. Overall, this is a high-quality contribution with no issues found.

@ryanaiagent self-assigned this on Dec 22, 2025
@ryanaiagent
Collaborator

Hi @donggyun112, thank you for your contribution! We appreciate you taking the time to submit this pull request. Your PR has been received by the team and is currently under review. We will provide feedback as soon as we have an update to share.

@ryanaiagent added the needs-review label ([Status] The PR is awaiting review from the maintainer) on Dec 23, 2025
@ryanaiagent
Collaborator

Hi @wyf7107, can you please review this?

@ryanaiagent
Collaborator

/gemini review

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a metadata parameter to Runner.run_async() to allow passing per-request context through the execution pipeline. The changes are well-implemented across Runner, InvocationContext, and LlmRequest to propagate this metadata. The addition of a shallow copy for the metadata dictionary in Runner.run_async() is a good defensive measure to prevent unintended side effects. The accompanying unit tests are thorough, covering various scenarios including edge cases like empty metadata and verifying the shallow copy behavior, which ensures the new feature is robust. The code is clean, well-documented, and the changes are consistent with the project's structure. For consistency, you might consider adding the metadata parameter to the run(), run_live(), and run_debug() methods in a follow-up PR. Overall, this is an excellent contribution.


Labels

core ([Component] This issue is related to the core interface and implementation)
needs-review ([Status] The PR is awaiting review from the maintainer)

Development

Successfully merging this pull request may close these issues.

Add metadata field to LLMRequest for simplified user-specific data handling in callbacks
