Iron.io Queues for Laravel 5.1

I wrote a post about using Iron.io’s Push Queues with Laravel 4.2, and of course it became obsolete a couple weeks after upgrading to Laravel 5.1.

Since Iron.io was the only provider offering “push” queues, supporting them added complexity to the other queue types, and the Laravel gods decided to give them the ax.

With a “push” queue, you queued a job and Iron.io would then ping your app back so a queue worker could handle it. Laravel would accept the ping from Iron.io and handle it accordingly with the Queue::marshal() command (deprecated in Laravel 5.1).
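
For reference, the old 4.2-style receive endpoint looked something like this (the route URI here is just an example):

// Laravel 4.2 only — the route Iron.io would POST to for a push queue.
// Queue::marshal() fired the handler registered for the pushed job.
Route::post('queue/receive', function () {
    return Queue::marshal();
});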

So here is how we set Iron.io queues back up in Laravel 5.1.

Sign up for a free Iron.io account #

Grab your Project ID and Token #

Click on the credentials icon. This will show you your Project ID and Token.


Create a new Iron.io Queue #

Within your Iron.io account, create a new Queue.


Select “pull” as the Queue Type.


Update Laravel’s queue driver to use iron #

Add your Iron.io credentials to the iron connection in the config/queue.php file.

'iron' => array(
    'driver'  => 'iron',
    'host'    => 'mq-aws-us-east-1.iron.io',
    'token'   => 'xxx',
    'project' => 'xxx',
    'queue'   => 'Your Queue Name',
    'encrypt' => true,
),
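
If you'd rather keep the token and project ID out of version control, you could pull these values from the environment instead; the IRON_* variable names below are just my own convention, not something Laravel defines for you:

'iron' => array(
    'driver'  => 'iron',
    'host'    => env('IRON_HOST', 'mq-aws-us-east-1.iron.io'),
    'token'   => env('IRON_TOKEN'),
    'project' => env('IRON_PROJECT_ID'),
    'queue'   => env('IRON_QUEUE'),
    'encrypt' => true,
),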

Set the queue driver to iron in the .env file:

QUEUE_DRIVER=iron

For the local .env file, I kept the driver set to sync to make testing easier:

QUEUE_DRIVER=sync
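
If you went with the env()-based config above, the production .env file would also carry the Iron.io credentials, something like this (the values are placeholders):

QUEUE_DRIVER=iron
IRON_HOST=mq-aws-us-east-1.iron.io
IRON_TOKEN=xxx
IRON_PROJECT_ID=xxx
IRON_QUEUE=your-queue-name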

Install iron.io dependency through composer #

I was still getting an 'IronMQ' not found error when trying to use version 3 of iron_mq.


This time the solution was to use v2, by adding "iron-io/iron_mq": "2.*" to the composer.json file.

Then, from the command line in your Laravel project’s root, run composer update to pull that in.
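
For reference, the require block of composer.json just gains that one line alongside your existing dependencies:

"require": {
    "iron-io/iron_mq": "2.*"
},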

Push a Job onto the Queue #

Let’s push a new task onto the queue. In our app, we’re creating a task to export results, so when a visitor requests an export, we add the following to our “export” controller.

$data = ['user_id' => $user_id, 'file' => $file];
$this->dispatch(new Export($data));

This will dispatch a new job named “Export” and send the required data with it (in our case, the user_id and file name) to Iron.io.
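
In context, the controller method might look something like this; the ExportController name and its store() action are hypothetical, and $this->dispatch() is available because the base controller uses the DispatchesJobs trait:

<?php

namespace App\Http\Controllers;

use App\Jobs\Export;
use Illuminate\Http\Request;

class ExportController extends Controller
{
    /**
     * Queue an export for the currently authenticated user.
     */
    public function store(Request $request)
    {
        // Hypothetical example data: the requesting user's ID and a file name for the export.
        $data = [
            'user_id' => $request->user()->id,
            'file'    => 'results.csv',
        ];

        // Dispatch the queued Export job; with the iron driver this lands on Iron.io.
        $this->dispatch(new Export($data));

        return redirect()->back()->with('status', 'Export queued.');
    }
}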

Create Job Class #

Next we’ll create a Job Class to do the work of exporting the results. We can do that through artisan:

php artisan make:job Export --queued

This command will generate a new class in the app/Jobs directory, which I modified slightly to look like this.

<?php

namespace App\Jobs;

use Illuminate\Queue\SerializesModels;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Bus\SelfHandling;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\Mail;

class Export extends Job implements SelfHandling, ShouldQueue
{
    use InteractsWithQueue, SerializesModels;

    protected $data;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($data)
    {
        $this->data = $data;
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        //a bunch of code to export out the results & notify user when it's completed
    }
}
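
For what it’s worth, the elided handle() method ends up as something like the sketch below; the emails.export-ready view name and the App\User lookup are hypothetical stand-ins for your own export and notification logic.

/**
 * Hypothetical sketch of the handle() body, for illustration only.
 */
public function handle()
{
    // Look up the user who requested the export (App\User is the stock 5.1 model).
    $user = \App\User::findOrFail($this->data['user_id']);
    $file = $this->data['file'];

    // ...generate the export and write it somewhere, e.g. storage_path('exports/' . $file)...

    // Notify the user that the export has finished.
    Mail::send('emails.export-ready', ['file' => $file], function ($message) use ($user) {
        $message->to($user->email)->subject('Your export is ready');
    });
}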

Test Queue #

Before moving any further, let’s check that everything is working as expected. Working in your local environment, add something to the queue; in our case, we’ll trigger the export. Then, from the project’s root, run php artisan queue:work iron. This command will do nothing if the queue is empty, but if there’s an item on the queue it will fetch it and attempt to execute it. In our case, we should see the export get processed successfully.

With the queue working as expected locally, we can now sync this up with Iron.io.

Setting up Queue Listener with Supervisor #

Unlike a “push” queue, Iron.io will hold all of the queued jobs until our app asks for them. It’s the listener’s job to pull and run new jobs as they are pushed onto the queue.

Supervisor is a process monitor for the Linux operating system, and will automatically restart your queue:listen or queue:work commands if they fail. To install Supervisor on Ubuntu, we use the following command:

sudo apt-get install supervisor

After it is installed you need to configure it. First, create a configuration file:

vi /etc/supervisor/conf.d/worker.conf

And add the following to the new worker.conf file.

[program:worker]
process_name=%(program_name)s_%(process_num)02d
command=php /PATH/TO/artisan queue:work iron --sleep=3 --tries=3 --daemon
autostart=true
autorestart=true
user=YOUR-USER
numprocs=2
redirect_stderr=true
stdout_logfile=/PATH/TO/worker.log

You should replace /PATH/TO with the path to your project’s root.

You should replace YOUR-USER with your SSH user (e.g. root).

With the configuration file created, update the Supervisor configuration and start the processes using the following commands:

 sudo supervisorctl reread
 sudo supervisorctl update
 sudo supervisorctl start worker:*
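
You can check that both worker processes actually came up with:

 sudo supervisorctl status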

Handling Failed Jobs #

I’ve found that debugging the jobs can be a bit of a pain in the ass. Things that work locally sometimes fail when integrated with Iron.io. This can happen for a number of reasons, but I’ve found it helpful to receive an email when a job fails. You can do this through the AppServiceProvider.php file, within the boot function.

public function boot()
{
    \Queue::failing(function ($connection, $job, $data) {
        $info['data'] = $data;
        \Mail::send('emails.jobs.failed', $info, function($message) {
            $message->to('to@email.com')->subject('Job failed');
        });
    });
}

And you can trigger a failed job from within the job itself by throwing an exception:

 throw new \Exception('Job Failed');

What I haven’t figured out yet is how to pass the exception message to the email notification. I hope to get that worked out soon, as it would be very helpful for debugging.

Important note #

Since daemon queue workers are long-lived processes, they will not pick up changes in your code without being restarted. So, the simplest way to deploy an application using daemon queue workers is to restart the workers during your deployment script. You may gracefully restart all of the workers by including the following command in your deployment script:

 php artisan queue:restart

This command will gracefully instruct all queue workers to restart after they finish processing their current job so that no existing jobs are lost.

 