
Conversation


@ebati ebati commented Jan 14, 2026

Fix duplicate conditions in S3 presigned POST policy generation

Description

During testing with Garage (a stricter S3-compatible object store, used here as an alternative to MinIO), we discovered that the policy produced by generate_presigned_post contains duplicate conditions. This happens because boto3 automatically includes certain conditions that the application code was also adding manually.

The generated policy contains redundant bucket and key conditions:

{
  "expiration": "<exp date>",
  "conditions": [
    { "bucket": "<bucket name>" },          // ← duplicate
    ["content-length-range", 1, 24928],
    { "Content-Type": "image/jpeg" },
    { "key": "<key value>" },               // ← duplicate
    { "bucket": "<bucket name>" },          // ← duplicate
    { "key": "<key value>" },               // ← duplicate
    { "x-amz-algorithm": "AWS4-HMAC-SHA256" },
    { "x-amz-credential": "<cred>" },
    { "x-amz-date": "<date>" }
  ]
}

According to the boto3 documentation:

  • Bucket-related conditions should not be included in the Conditions parameter
  • Key-related conditions and fields are filled out for you and should not be included in the Fields or Conditions parameters

This fix removes the manually set bucket and key entries from Conditions and the key entry from Fields, letting boto3 supply them. This matches how boto3's own implementation builds the policy.
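
For illustration, a minimal sketch of the corrected call; names like s3_client, bucket_name, object_name, and max_size are placeholders, not the project's actual code:

from botocore.exceptions import ClientError

def generate_presigned_post(s3_client, bucket_name, object_name, file_type, max_size):
    # Let boto3 inject the bucket and key conditions itself.
    try:
        return s3_client.generate_presigned_post(
            Bucket=bucket_name,   # boto3 derives the bucket condition from this
            Key=object_name,      # boto3 derives the key condition/field from this
            Fields={"Content-Type": file_type},
            Conditions=[
                ["content-length-range", 1, max_size],
                {"Content-Type": file_type},  # must match the entry in Fields
            ],
            ExpiresIn=3600,
        )
    except ClientError:
        return None

The returned dict contains a url plus the fields (key, policy, signature, x-amz-*) that a client submits in a multipart form POST.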

Type of Change

  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [ ] Feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [x] Code refactoring
  • [ ] Performance improvements
  • [ ] Documentation update

Screenshots and Media (if applicable)

Test Scenarios

References


Note

Streamlines presigned POST policy creation to prevent duplicate conditions and align with boto3 behavior.

  • In generate_presigned_post, remove manual bucket/key entries from Conditions and stop setting key in Fields; rely on boto3 to supply them
  • Fields now only includes Content-Type and bucket; Conditions limited to content-length-range and Content-Type

Written by Cursor Bugbot for commit 4143cb2. This will update automatically on new commits.

Summary by CodeRabbit

  • Bug Fixes
    • Improved storage handling in file upload operations by refining presigned URL generation logic for AWS S3 integration.


Copilot AI review requested due to automatic review settings January 14, 2026 21:04

CLAassistant commented Jan 14, 2026

CLA assistant check
All committers have signed the CLA.


coderabbitai bot commented Jan 14, 2026

📝 Walkthrough

The generate_presigned_post function in the storage settings module was refactored to simplify AWS presigned POST generation by adding bucket to the Fields payload, removing explicit bucket condition checks, and moving key specification to the standard Key parameter instead of conditional inline logic.

Changes

Cohort / File(s): AWS Presigned POST Refactoring: apps/api/plane/settings/storage.py
Summary: Added bucket to the presigned POST Fields; removed the bucket condition check from the Conditions array; simplified key handling by removing the conditional logic that previously set Fields["key"] based on object_name; the key is now supplied only via the Key parameter.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 A simpler path for files to soar,
Bucket and key need conditions no more,
AWS speaks in a cleaner tongue,
The storage refactor—chef's kiss—well done! 🚀

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check: ✅ Passed. The title accurately describes the main fix: removing duplicate conditions from S3 presigned POST policy generation, which is the core change.
  • Description check: ✅ Passed. The description is comprehensive, with detailed context, technical justification via boto3 documentation, problem examples, and a clear explanation of the fix. The type of change (Code refactoring) is properly selected.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.




Copilot AI left a comment


Pull request overview

This pull request addresses duplicate conditions in the S3 presigned POST policy generation by removing manual addition of bucket and key conditions that boto3 automatically includes. The PR aims to align with boto3's documented best practices where bucket and key should not be manually specified in the Fields or Conditions parameters since they are automatically handled when passed as direct parameters to generate_presigned_post().

Changes:

  • Removed manual bucket condition from the conditions list
  • Removed conditional logic for handling key in both fields and conditions
  • Simplified the generate_presigned_post method to rely on boto3's automatic handling


fields = {"Content-Type": file_type}
fields = {
"Content-Type": file_type,
"bucket": self.aws_storage_bucket_name,

Copilot AI Jan 14, 2026


According to the boto3 documentation and the PR description, bucket should not be included in the Fields parameter. The boto3 library automatically handles the bucket when it's passed as the Bucket parameter to generate_presigned_post(). Including it in fields will still cause duplicate bucket conditions in the generated policy. This line should be removed, leaving only the Content-Type field.

Suggested change:
-            "bucket": self.aws_storage_bucket_name,

@ebati (Author) replied:

The documentation only mentions Conditions, not Fields, for bucket.


@cursor cursor bot left a comment


This PR is being reviewed by Cursor Bugbot


fields = {
    "Content-Type": file_type,
    "bucket": self.aws_storage_bucket_name,
}

Bucket incorrectly added to fields instead of being removed

High Severity

The PR description states the intent is to remove manually-set bucket and key to let boto3 handle them. However, the code now adds "bucket": self.aws_storage_bucket_name to the fields dictionary. The bucket is not a valid form field in S3 POST requests—it's determined by the URL endpoint. boto3 handles bucket automatically by adding it to policy conditions. This extra form field could cause upload failures, particularly on stricter S3-compatible services like Garage that this PR is meant to support.
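
For context, a hedged sketch of how a client consumes the presigned POST response; the bucket is encoded in the returned URL, never in the form fields (the requests usage and file names are illustrative):

import requests

post = s3_client.generate_presigned_post(Bucket=bucket_name, Key=object_name)
with open("photo.jpg", "rb") as f:
    resp = requests.post(
        post["url"],              # the bucket lives in this endpoint URL
        data=post["fields"],      # key, policy, signature, x-amz-* fields
        files={"file": ("photo.jpg", f, "image/jpeg")},
    )
# S3 returns 204 No Content on a successful POST by default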



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
apps/api/plane/settings/storage.py (1)

86-88: Consider using log_exception for consistency.

Other methods in this class (generate_presigned_url, get_object_metadata, etc.) use log_exception(e) for error handling, but this method uses print(). For consistent logging and observability, consider aligning with the established pattern.

♻️ Suggested change
         except ClientError as e:
-            print(f"Error generating presigned POST URL: {e}")
+            log_exception(e)
             return None
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8399f64 and 4143cb2.

📒 Files selected for processing (1)
  • apps/api/plane/settings/storage.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-23T14:18:32.899Z
Learnt from: dheeru0198
Repo: makeplane/plane PR: 8339
File: apps/api/plane/db/models/api.py:35-35
Timestamp: 2025-12-23T14:18:32.899Z
Learning: Django REST Framework rate limit strings are flexible: only the first character of the time unit matters. Acceptable formats include: "60/s", "60/sec", "60/second" (all equivalent), "60/m", "60/min", "60/minute" (all equivalent), "60/h", "60/hr", "60/hour" (all equivalent), and "60/d", "60/day" (all equivalent). Abbreviations like "min" are valid and do not need to be changed to "minute". Apply this guidance to any Python files in the project that configure DRF throttling rules.

Applied to files:

  • apps/api/plane/settings/storage.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Cursor Bugbot
  • GitHub Check: CodeQL analysis (python)
  • GitHub Check: Agent
🔇 Additional comments (1)
apps/api/plane/settings/storage.py (1)

70-73: LGTM on conditions array.

The removal of explicit bucket and key conditions is correct. boto3's generate_presigned_post automatically injects these based on the Bucket and Key parameters (lines 79-80), so manually adding them would create duplicates that strict S3-compatible stores like Garage reject.


Comment on lines +65 to +68
fields = {
    "Content-Type": file_type,
    "bucket": self.aws_storage_bucket_name,
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for generate_presigned_post usage and tests in the codebase
rg -n 'generate_presigned_post' --type=py -A 10 -B 2

Repository: makeplane/plane

Length of output: 17121


🌐 Web query:

boto3 s3 generate_presigned_post fields parameter bucket documentation

💡 Result:

Short answer: The boto3 S3 client method generate_presigned_post(Bucket, Key, Fields=None, Conditions=None, ExpiresIn=3600) requires Bucket and Key. The Fields parameter is a dict of pre-filled form fields you want included in the POST (examples accepted: acl, Cache-Control, Content-Type, Content-Disposition, Content-Encoding, Expires, success_action_redirect / success_action_status, and x-amz-meta-*). Bucket- and Key-related conditions/fields are handled for you and should not be included in Fields or in Conditions; if you include an element in Fields you must also include a matching Condition. See the official docs. [1][2]

Sources

  • Boto3 generate_presigned_post documentation. [1]
  • Botocore / boto3 S3 client reference (generate_presigned_post parameters and notes). [2]

Remove bucket from Fields dictionary.

Per AWS boto3 documentation, bucket and key are handled automatically by boto3 and should not be included in the Fields parameter. Including bucket in Fields violates AWS S3 POST requirements and contradicts the PR objective to remove duplicates. Additionally, if an element is included in Fields, a matching condition must be present—bucket is not in the Conditions array, creating an inconsistency.
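
To illustrate the pairing rule, a small sketch using acl, one of the fields the documentation lists as acceptable (values illustrative):

fields = {"acl": "public-read", "Content-Type": "image/jpeg"}
conditions = [
    {"acl": "public-read"},          # matches the Fields entry
    {"Content-Type": "image/jpeg"},  # matches the Fields entry
    ["content-length-range", 1, 10 * 1024 * 1024],
]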

🤖 Prompt for AI Agents
In `@apps/api/plane/settings/storage.py` around lines 65-68: remove the "bucket"
entry from the Fields dictionary used to build the S3 POST parameters; in the
method that constructs the fields dict (the variable named fields in
settings/storage.py), delete the "bucket": self.aws_storage_bucket_name line so
bucket is no longer included in Fields; rely on boto3 to handle bucket/key
automatically and ensure Conditions remains consistent with the remaining
Fields.

