When you start handling things asynchronously, thinking about what happens when code fails is even more important! Why? Well, when you handle things synchronously, if something fails, typically, the whole process fails, not just half of it. Or, at least, you can make the whole process fail if you need to.
For example: pretend all our code is still synchronous: we save the ImagePost to the database, but then, down here, adding Ponka to the image fails... because she's napping. Right now, that would result in half of the work being done... which, depending on how sensitive your app is, may or may not be a huge deal. If it is, you can solve it by wrapping all of this in a database transaction.
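As a rough sketch of that idea - this is not the tutorial's actual code: $entityManager is Doctrine's EntityManagerInterface and $ponkaAdder is a hypothetical service that does the "add Ponka" work synchronously - a transaction makes the two steps all-or-nothing:

// Minimal sketch, assuming a fully synchronous flow.
// $entityManager: Doctrine EntityManagerInterface
// $ponkaAdder: hypothetical service that adds Ponka to the image
$entityManager->transactional(function () use ($entityManager, $imagePost, $ponkaAdder) {
    // Save the new ImagePost...
    $entityManager->persist($imagePost);
    $entityManager->flush();

    // ...then add Ponka. If this throws, the whole transaction
    // rolls back and the ImagePost row is never committed.
    $ponkaAdder->addPonkaToImage($imagePost);
});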
Thinking about how things will fail - and coding defensively when you need to - is just a healthy programming practice.
But this all changes when some code is async! Think about it: we save the ImagePost to the database, AddPonkaToImage is sent to the transport and the response is successfully returned. Then, a few seconds later, our worker processes that message and, due to a temporary network problem, the handler throws an exception!
That's... not a great situation. The user thinks everything was ok because they didn't see an error. And now we have an ImagePost in the database... but Ponka will never be added to it. Ponka is furious.
The point is: when we send a message to a transport, we really need to make sure that the message is eventually processed. If it's not, it could lead to some weird inconsistencies in our system.
So let's start making our code fail to see what happens! Inside AddPonkaToImageHandler, right before the real code runs, say if rand(0, 10) < 7, then throw a new \Exception() with: I failed randomly!!!!
... lines 1 - 13
class AddPonkaToImageHandler implements MessageHandlerInterface, LoggerAwareInterface
{
    ... lines 16 - 30
    public function __invoke(AddPonkaToImage $addPonkaToImage)
    {
        ... lines 33 - 46
        if (rand(0, 10) < 7) {
            throw new \Exception('I failed randomly!!!!');
        }
        ... lines 50 - 56
    }
}
Let's see what happens! First, go restart the worker:
php bin/console messenger:consume -vv
Then I'll clear the screen and... let's upload! How about five photos? Go back over to see what's happening! Whoa! A lot is happening. Let's pull this apart.
The first message was received and handled. The second message was received and also handled successfully. The third message was received but an exception occurred while handling it: "I failed randomly!". Then it says: "Retrying - retry #1" followed by "Sending message". Yea, because it failed, Messenger automatically "retries" it... which literally means that it sends that message back to the queue to be processed later! One of these "Received message" logs down here is actually that message being received for a second time, thanks to the retry. The cool thing is... eventually... all the messages were handled successfully! That's why retries rock. We can see this when we refresh: everyone has a Ponka photo... even though some of these failed at first.
But... let's try this again... because that example didn't show the most interesting case. I'll select all the photos this time... oh, but first, let's clear the screen on our worker terminal. Ok, upload, then... move over.
Here we go: this time... thanks to randomness, we're seeing a lot more failures. We see that a couple of messages failed and were sent for retry #1. Then, some of those messages failed again and were sent for retry #2! And... yea! They failed yet again and were sent for retry #3. Finally... oh yes, perfect: after being attempted once and then retried 3 more times, one of the messages still failed. This time, instead of sending it for retry #4, it says:
Rejecting AddPonkaToImage (removing from transport)
Here's what's going on: by default, Messenger will retry a message three times. If it still fails, it's finally removed from the transport and the message is lost permanently. Well... that's not totally true... and there's a bit more going on here than it seems at first.
Next, if you look closely... these retries are delayed at an increasing level. Let's learn why and how to take complete control over how your messages are retried.
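As a preview, those retry options live under each transport's retry_strategy key. Here's an illustrative sketch - this config isn't shown in this chapter, and the async transport name and values are just examples of Messenger's options:

# config/packages/messenger.yaml (illustrative sketch, not from this chapter)
framework:
    messenger:
        transports:
            async:
                dsn: '%env(MESSENGER_TRANSPORT_DSN)%'
                retry_strategy:
                    max_retries: 3    # the default: retry 3 times
                    delay: 1000       # milliseconds before the first retry
                    multiplier: 2     # each retry waits twice as long
                    max_delay: 0      # 0 means no maximum delay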
Hey Fernando A.!
Yes, in Symfony 4.4.12 and higher, only the most recent RedeliveryStamp is kept precisely due to this problem - https://github.com/symfony/...
However, in 5.2.0, there is a new ErrorDetailsStamp which tries to keep a history of past failures with this message: https://github.com/symfony/...
I don't know if you had a question... and if you did... if I answered it - so let me know :).
Cheers!
Hi, sorry for not specifying.
I was having an error on RabbitMQ when I was trying to send a message on a third retry. The error was "Table too large" and I concluded that the reason was that the stamp array was really big.
I'm using my custom serializer, so the method that included ALL the stamps was causing this problem.
So, to solve this, I just included only the LAST RedeliveryStamp in the custom serializer, and the problem was solved: no more "Table too large" errors.
There is another problem: when you make a query to the database and it fails, you will get an "Entity manager closed" error. I discovered that adding "--failure-limit=1" to the worker command was the "official" solution for the problem: every time a failure occurs, the message is sent for retry and the worker is closed, and if you have supervisor, a new worker will be created with a new database connection.
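For reference, the full command looks something like this (the --failure-limit option exists on messenger:consume in newer Symfony versions - not the 4.3 release shown below - and the async transport name is just an example):

php bin/console messenger:consume async --failure-limit=1 -vv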
Thank you for your videos. I learned how to use Messenger with your tutorials - the best $12 ever spent.
Thanks for your kind words Fernando! I'm glad to hear that you could fix your problem. Cheers!
// composer.json
{
"require": {
"php": "^7.1.3",
"ext-ctype": "*",
"ext-iconv": "*",
"composer/package-versions-deprecated": "^1.11", // 1.11.99
"doctrine/annotations": "^1.0", // v1.8.0
"doctrine/doctrine-bundle": "^1.6.10", // 1.11.2
"doctrine/doctrine-migrations-bundle": "^1.3|^2.0", // v2.0.0
"doctrine/orm": "^2.5.11", // v2.6.3
"intervention/image": "^2.4", // 2.4.2
"league/flysystem-bundle": "^1.0", // 1.1.0
"phpdocumentor/reflection-docblock": "^3.0|^4.0", // 4.3.1
"sensio/framework-extra-bundle": "^5.3", // v5.3.1
"symfony/console": "4.3.*", // v4.3.2
"symfony/dotenv": "4.3.*", // v4.3.2
"symfony/flex": "^1.9", // v1.18.7
"symfony/framework-bundle": "4.3.*", // v4.3.2
"symfony/messenger": "4.3.*", // v4.3.4
"symfony/property-access": "4.3.*", // v4.3.2
"symfony/property-info": "4.3.*", // v4.3.2
"symfony/serializer": "4.3.*", // v4.3.2
"symfony/validator": "4.3.*", // v4.3.2
"symfony/webpack-encore-bundle": "^1.5", // v1.6.2
"symfony/yaml": "4.3.*" // v4.3.2
},
"require-dev": {
"easycorp/easy-log-handler": "^1.0.7", // v1.0.7
"symfony/debug-bundle": "4.3.*", // v4.3.2
"symfony/maker-bundle": "^1.0", // v1.12.0
"symfony/monolog-bundle": "^3.0", // v3.4.0
"symfony/stopwatch": "4.3.*", // v4.3.2
"symfony/twig-bundle": "4.3.*", // v4.3.2
"symfony/var-dumper": "4.3.*", // v4.3.2
"symfony/web-profiler-bundle": "4.3.*" // v4.3.2
}
}
Hi, you can get only the last RedeliveryStamp using:
$allStamps = [$envelope->last(RedeliveryStamp::class)];
So, with this method, you will not get the "Table too large" error on RabbitMQ.
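To show that idea in context, here is a minimal, hypothetical sketch - the class name is made up, and the serialize()-based encoding only mirrors Messenger's built-in PhpSerializer for illustration, not any real project serializer:

use Symfony\Component\Messenger\Envelope;
use Symfony\Component\Messenger\Stamp\RedeliveryStamp;
use Symfony\Component\Messenger\Transport\Serialization\SerializerInterface;

class LastRedeliveryOnlySerializer implements SerializerInterface
{
    public function encode(Envelope $envelope): array
    {
        // Keep only the most recent RedeliveryStamp so the
        // headers table sent to RabbitMQ stays small.
        $lastRedelivery = $envelope->last(RedeliveryStamp::class);
        $envelope = $envelope->withoutAll(RedeliveryStamp::class);
        if (null !== $lastRedelivery) {
            $envelope = $envelope->with($lastRedelivery);
        }

        // Illustration only: serialize the whole envelope, like
        // Messenger's built-in PhpSerializer does.
        return ['body' => serialize($envelope)];
    }

    public function decode(array $encodedEnvelope): Envelope
    {
        return unserialize($encodedEnvelope['body']);
    }
}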