fileWrite for S3 Virtual File System Implementation Problem (Assumes Public Bucket)

Description

None

Environment

Given I have an S3 bucket whose public access is blocked (see image)
and I have a mapping of this.mappings["/s3new"] = "s3://REDACTED:REDACTED@mybucketname";
and the foo/ directory exists in the bucket
When I perform a fileWrite("/s3new/foo/delete_me_2.txt", "delete me now");
Then I get an access denied message from S3
Whereas the same operation from the AWS CLI (aws s3 cp foo.txt s3://mybucketname/foo/delete_me_2.txt), using the same credentials, succeeds.

If I do not block public access on the bucket, then I am able to successfully upload from Lucee.
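For context, the setup described above can be sketched as an Application.cfc fragment. This is a minimal sketch of the reporter's configuration; the bucket name is the ticket's example and the credentials are placeholders (the real values are redacted):

```cfml
// Application.cfc (sketch of the setup described in this ticket;
// ACCESS_KEY_ID and SECRET_ACCESS_KEY are placeholders)
component {
    this.name = "s3MappingRepro";

    // Map /s3new onto the S3 bucket via Lucee's s3:// virtual file system.
    // Format: s3://<accessKeyId>:<secretAccessKey>@<bucketName>
    this.mappings["/s3new"] = "s3://ACCESS_KEY_ID:SECRET_ACCESS_KEY@mybucketname";
}
```

With this mapping in place, fileWrite("/s3new/foo/delete_me_2.txt", "delete me now") is the call that fails when the bucket blocks public access.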

Here is the reason:

The (successful) AWS CLI request has the following request parameters (as seen in CloudTrail):

{ "bucketName":"mybucketname.s3.amazonaws.com", "Host":"mybucketname.s3.amazonaws.com", "key":"foo/delete_me_2.txt" }

The (unsuccessful) Lucee fileWrite request gives the following request parameters:

{ "bucketName":"mybucketname.s3.amazonaws.com", "Host":"mybucketname.s3.amazonaws.com:443", "x-amz-acl":"public-read", "key":"foo/delete_me_2.txt" }

The errant "x-amz-acl":"public-read" header is the trouble: with public access blocked, S3 rejects any PUT request that asks for a public ACL.

I just got off the phone with AWS support. Here’s the scoop:

The workaround of unblocking public access is unacceptable (IMO): while I could still keep the public at large from having access (by not granting it in the bucket policy), it would still be an odd, untidy configuration.

Worse, however, is that the "x-amz-acl":"public-read" header yields public access on the object that’s written.

It seems like Lucee is defaulting to poor security here, with no way to fix it (or am I missing something?).

Related:

Tangent: Why does Lucee use jetS3t? It doesn’t seem to be the most lively project out there for Java/S3.

Attachments

1 attachment • 26 Jul 2022, 06:12 pm


Activity


Michael Offner 3 June 2024 at 16:42

As a workaround, best use the S3 functions directly; they give you much more freedom to interact with S3.
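A hedged sketch of that workaround, using the S3 extension's s3write() function (which the comment below reports working). The argument order shown here, and the embedded-credentials path form, are assumptions; check the S3 extension documentation for your version. Bucket name and credentials are placeholders:

```cfml
// Sketch only: write the object via the S3 extension's s3write()
// instead of fileWrite(), bypassing the virtual-file-system mapping.
// ACCESS_KEY_ID, SECRET_ACCESS_KEY and mybucketname are placeholders;
// the exact parameter list may differ by S3 extension version.
s3write(
    "s3://ACCESS_KEY_ID:SECRET_ACCESS_KEY@mybucketname/foo/delete_me_2.txt",
    "delete me now"
);
```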

Pothys - MitrahSoft 11 August 2022 at 09:46
Edited

I've checked this ticket with the latest Lucee version, 5.3.10.51-SNAPSHOT, with public access blocked on the S3 bucket. Yes, when I write a file using fileWrite("/s3/.../.../test.txt", "test"), I get an error like Access Denied; error-code AccessDenied.

It seems I can't perform the fileWrite() operation in ACF either; it throws an error like Error occurred while performing write: java.io.FileNotFoundException: s3://…/…/test.txt

But I can perform fileRead(), fileCopy(), and fileDelete() operations, and I can write the file successfully using s3write() in Lucee.
I can also copy the file using the AWS CLI.

Will confirm about this.


Updated

I noticed that if public access is blocked on the S3 bucket and I then write a file using fileWrite(), I get an error like Access Denied; error-code AccessDenied, but the file is nevertheless successfully uploaded to the S3 bucket.

Also, if I copy a local file to the S3 bucket using fileCopy(), it throws an error like Can't copy file [D:.......\test.txt] to [s3://.../..../test.txt], yet it also successfully copies the file to the S3 bucket.

As mentioned above, performing fileCopy() within the same bucket successfully copies the file without any error.

Details

Assignee

Reporter

Priority

Labels


Sprint

Affects versions

Created 26 July 2022 at 18:12
Updated 6 March 2025 at 11:30
