Hey! Flysystem is now talking to S3! We know this because we can see the article_image directory and all the files inside of it. But when we went back to the homepage and refreshed, nothing worked!
Check out the image src URL: this is definitely wrong, because this now needs to point to S3 directly. But! Things get even more interesting if you go back to the S3 page and refresh. We have a media/ directory! And if you dig, there are the thumbnails! Woh!
This means that this thumbnail request did successfully get processed by a Symfony route and controller and it did correctly grab the source file from S3, thumbnail it and write it back to S3. That's freaking cool! And it worked because we already made LiipImagineBundle play nicely with Flysystem. We told the "loader" to use Flysystem - that's the thing that downloads the source image when it needs to thumbnail it - and the resolver to use Flysystem, which is the thing that actually saves the final image.
So if our system is working so awesomely... why don't the images show up? It's because of the hostname in front of the images: it's pointing at our local server, but it should be pointing at S3.
Click any of the images on S3. Here it is: every object in S3 has its own, public URL. Well actually, every object has a URL, but whether or not anyone can access that URL is another story. More on that later. I'm going to copy the very beginning of that, and then go open services.yaml. Earlier, we created a parameter called uploads_base_url. LiipImagineBundle uses this to prefix every URL that it renders. The current value includes 127.0.0.1:8000 because that's our SITE_BASE_URL environment variable value. That worked fine when things were stored locally... but not anymore!
Change this to https://s3.amazonaws.com/ and then our bucket name, which is already available as an environment variable: %env()%, then go copy AWS_S3_ACCESS_BUCKET, and paste.
... lines 1 - 5
parameters:
    ... lines 7 - 8
    uploads_base_url: 'https://s3.amazonaws.com/%env(AWS_S3_ACCESS_BUCKET)%'
... lines 10 - 61
This is our new base URL. What about the uploads_dir_name parameter? We're not using that at all anymore! Trash it.
Ok, let's try it! Refresh and... it actually works! I mean... of course, it works!
There's one other path we need to fix: the absolute path to uploaded assets that are not thumbnailed. Open up src/Service/UploaderHelper.php and find the getPublicPath() method... there it is. This is a super-handy method: it allows us to get the full, public path to any uploaded file. This $publicAssetBaseUrl property... if you look on top, it comes from an argument called $uploadedAssetsBaseUrl. And in services.yaml, that is bound to the uploads_base_url parameter... that we just set!
There are a few layers, but it means that, in UploaderHelper, the $publicAssetBaseUrl property is now the long S3 URL, which is perfect!
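To see all those layers in one place, here's a minimal sketch of that wiring, using the property and argument names from above (the other constructor arguments, like the filesystem and requestStackContext services, are omitted):

// src/Service/UploaderHelper.php (sketch - other constructor arguments omitted)
class UploaderHelper
{
    /** @var string */
    private $publicAssetBaseUrl;

    // services.yaml binds $uploadedAssetsBaseUrl to the uploads_base_url parameter,
    // so this argument now receives the full S3 base URL
    public function __construct(string $uploadedAssetsBaseUrl)
    {
        $this->publicAssetBaseUrl = $uploadedAssetsBaseUrl;
    }
}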
Head back down to getPublicPath(). Even before we changed uploads_base_url to point to S3, we were already setting it to the absolute URL to our domain... which means that this method already had a subtle bug!
Check it out: the original purpose of this code was to use $this->requestStackContext->getBasePath() to "correct" our paths in case our site was deployed under a sub-directory of a domain - like https://space.org/thespacebar. In that case, getBasePath() would equal /thespacebar and would automatically prefix all of our URLs.
But ever since we started including the full domain in $publicAssetBaseUrl, this would create a broken URL! We could remove this. Or, to make it still work if $publicAssetBaseUrl happens to not include the domain, above this, set $fullPath =, copy the path part, replace that with $fullPath, and paste.
... lines 1 - 12
class UploaderHelper
{
    ... lines 15 - 60
    public function getPublicPath(string $path): string
    {
        $fullPath = $this->publicAssetBaseUrl.'/'.$path;
        ... lines 64 - 69
        return $this->requestStackContext
            ->getBasePath().$fullPath;
    }
    ... lines 73 - 127
}
Then, if strpos($fullPath, '://') !== false, we know that $fullPath is already absolute. In that case, return it! That's what our code is doing. But if it's not absolute, we can keep prefixing the sub-directory.
... lines 1 - 12
class UploaderHelper
{
    ... lines 15 - 60
    public function getPublicPath(string $path): string
    {
        $fullPath = $this->publicAssetBaseUrl.'/'.$path;
        // if it's already absolute, just return
        if (strpos($fullPath, '://') !== false) {
            return $fullPath;
        }
        // needed if you deploy under a subdirectory
        return $this->requestStackContext
            ->getBasePath().$fullPath;
    }
    ... lines 73 - 127
}
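To make the two branches concrete, here's a hypothetical illustration - the values are examples, not from the project:

// Hypothetical values, just to illustrate the two branches:
// 1) uploads_base_url = 'https://s3.amazonaws.com/my-bucket' (contains '://')
//    $uploaderHelper->getPublicPath('article_image/astronaut.jpeg')
//    returns 'https://s3.amazonaws.com/my-bucket/article_image/astronaut.jpeg' as-is
// 2) uploads_base_url = '/uploads' and the site deployed under /thespacebar
//    $uploaderHelper->getPublicPath('article_image/astronaut.jpeg')
//    returns '/thespacebar/uploads/article_image/astronaut.jpeg'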
Hey! The files are uploading to S3 and our public paths are pointing to the new URLs perfectly. Next, we can simplify! Remember how we have one public filesystem and one private filesystem? With S3, we only need one.
Hello! :-)
I don't know why my Imagine bundle is still not able to write to my S3 bucket.
Everything else is working fine, as I strictly followed the instructions.
Could that come from Flysystem v2 or some other difference in package versions?
I have always used the latest versions of all the packages used in this tutorial.
OK, I think I found the problem!
Now, when creating (or editing) a bucket, there is an option called Object Ownership with which you can set ACLs disabled (which was recommended) or ACLs enabled.
Setting ACLs enabled fixed my problem, because it allows you to set the visibility of a bucket object when uploading it.
Since the object is now publicly visible, the Imagine bundle can load it AND save it back to S3 with public visibility.
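In code, that visibility is passed when the file is written - a minimal sketch, assuming the Flysystem v1 API used in this tutorial (in v2 the constant lives in League\Flysystem\Visibility instead):

use League\Flysystem\AdapterInterface;

// Sketch: write the object with a public ACL so it can be read back over plain HTTP
// ('article_image/example.jpeg' and $fileContents are hypothetical)
$filesystem->write(
    'article_image/example.jpeg',
    $fileContents,
    ['visibility' => AdapterInterface::VISIBILITY_PUBLIC]
);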
Hope I am not too wrong here :-D
Hey there,
Nice! You sorted it out by yourself. AWS access config is always a pain. Basically, what you want is to make your images publicly visible, and the AWS account that uploads files should have write permissions on your bucket.
Cheers!
Hey Team,
I managed to use AWS S3 to store images, and it works fine locally with league/flysystem-bundle and liip/imagine-bundle. On Heroku (hobby dynos), the upload is OK, but there is a problem generating the thumbnails.
Do you know what I should check?
Cheers.
Hey Stephane
Do you see any exceptions? If it's thumbnail generation, it's probably a GD lib or Imagick misconfiguration... but without an exception message it's pretty hard to say.
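If you want a quick sanity check, a tiny hypothetical script run on the dyno (e.g. via heroku run php check.php) can tell you which image extensions are loaded:

// check.php - hypothetical one-off script to see which image libraries exist
var_dump(extension_loaded('gd'));      // true if the GD extension is available
var_dump(extension_loaded('imagick')); // true if the Imagick extension is available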
Cheers!
Hey @Vladimir,
Thank you for your reply. I get a 404 Not Found for the image:
https://snowtrick-app.herok...
I have already added the GD lib in composer.json.
Perhaps it is a problem with the Nginx server? Or a routing problem?
Locally, when I paste the path of the image:
https://localhost:8000/media/cache/resolve/image_thumb_home_xl/snowtrick-image/default-snowboard-trick.webp
it is transformed into:
https://s3-image-snowtrick....
But when I paste the path of the image on the Heroku app:
https://snowtrick-app.herok...
there is no automatic redirect.
Cheers.
Yeah, it could be an nginx issue. You probably have an nginx rule for static files that throws a 404 when the file is not found, instead of passing the request on to PHP. Can you show your nginx configuration on the Heroku app?
Cheers!
location / {
    # try to serve file directly, fallback to rewrite
    try_files $uri @rewriteapp;
}

location @rewriteapp {
    # rewrite all to index.php
    rewrite ^(.*)$ /index.php/$1 last;
}

location ~ ^/index\.php(/|$) {
    try_files @heroku-fcgi @heroku-fcgi;
    # ensure that /index.php isn't accessible directly, but only through a rewrite
    internal;
}

location ~* \.(css|js|jpg|png|webp)$ {
    access_log off;
    add_header Cache-Control public;
    add_header Pragma public;
    add_header Vary Accept-Encoding;
    expires 1M;
}
Try adding try_files $uri @rewriteapp; to the location ~* \.(css|js|jpg|png|webp)$ block. This should help, I think.
// composer.json
{
"require": {
"php": "^7.1.3",
"ext-iconv": "*",
"aws/aws-sdk-php": "^3.87", // 3.87.10
"composer/package-versions-deprecated": "^1.11", // 1.11.99
"knplabs/knp-markdown-bundle": "^1.7", // 1.7.1
"knplabs/knp-paginator-bundle": "^2.7", // v2.8.0
"knplabs/knp-time-bundle": "^1.8", // 1.9.0
"league/flysystem-aws-s3-v3": "^1.0", // 1.0.22
"league/flysystem-cached-adapter": "^1.0", // 1.0.9
"liip/imagine-bundle": "^2.1", // 2.1.0
"nexylan/slack-bundle": "^2.0,<2.2.0", // v2.1.0
"oneup/flysystem-bundle": "^3.0", // 3.0.3
"php-http/guzzle6-adapter": "^1.1", // v1.1.1
"sensio/framework-extra-bundle": "^5.1", // v5.2.4
"stof/doctrine-extensions-bundle": "^1.3", // v1.3.0
"symfony/asset": "^4.0", // v4.2.3
"symfony/console": "^4.0", // v4.2.3
"symfony/flex": "^1.9", // v1.17.6
"symfony/form": "^4.0", // v4.2.3
"symfony/framework-bundle": "^4.0", // v4.2.3
"symfony/orm-pack": "^1.0", // v1.0.6
"symfony/security-bundle": "^4.0", // v4.2.3
"symfony/serializer-pack": "^1.0", // v1.0.2
"symfony/twig-bundle": "^4.0", // v4.2.3
"symfony/validator": "^4.0", // v4.2.3
"symfony/web-server-bundle": "^4.0", // v4.2.3
"symfony/yaml": "^4.0", // v4.2.3
"twig/extensions": "^1.5" // v1.5.4
},
"require-dev": {
"doctrine/doctrine-fixtures-bundle": "^3.0", // 3.1.0
"easycorp/easy-log-handler": "^1.0.2", // v1.0.7
"fzaninotto/faker": "^1.7", // v1.8.0
"symfony/debug-bundle": "^3.3|^4.0", // v4.2.3
"symfony/dotenv": "^4.0", // v4.2.3
"symfony/maker-bundle": "^1.0", // v1.11.3
"symfony/monolog-bundle": "^3.0", // v3.3.1
"symfony/phpunit-bridge": "^3.3|^4.0", // v4.2.3
"symfony/profiler-pack": "^1.0", // v1.0.4
"symfony/var-dumper": "^3.3|^4.0" // v4.2.3
}
}
Even after updating the uploads_base_url parameter, I couldn't see the images, because AWS S3 now expects a URL that uses the bucket name as a subdomain.
For anyone encountering this too, your parameter should look like:
uploads_base_url: 'https://%env(AWS_S3_ACCESS_BUCKET)%.s3.amazonaws.com'