4

Building a Laravel Octane Application

In the previous chapters, we focused on installing, configuring, and using some of the features provided by Laravel Octane. We looked at the difference between Swoole and RoadRunner, the two application servers supported by Laravel Octane.

In this chapter, we will focus on each feature of Laravel Octane to discover its potential and understand how it can be used individually.

The goal of this chapter is to analyze the functionality of Laravel Octane in a realistic context.

To do that, we will build a sample dashboard application covering several aspects, such as configuring the application, creating the database table schema, and generating initial data.

Then, we will implement the specific routes for delivering the dashboard, the data retrieval logic in the controller, and the queries in the model.

Next, we will create a page in the sample dashboard application that collects information from multiple queries. When implementing queries for retrieving data, we generally focus on the logic and the methods for filtering, sorting, and selecting data. In this chapter, however, we will keep the logic as simple as possible so that you can focus on other aspects, such as loading data efficiently by executing tasks in parallel, and we will apply some strategies to reduce the response time as much as possible (running tasks in parallel reduces the overall response time).

In designing the application architecture, we also need to consider the things that could go wrong.

In the examples in the previous chapter, we analyzed each feature by considering what is called the happy path. The happy path is the default scenario that the user takes to achieve the desired result without encountering any errors.

In designing a real application, we must also think about all the cases that are not included in the happy path. For example, with the concurrent execution of heavy queries, we need to think about the case where the execution returns an unexpected result, such as an empty result set, or where the execution of a query raises an exception. We also need to consider that a single exception may have an impact on the other concurrent executions. This is a more realistic scenario (where things can go wrong because of exceptions), and in this chapter, we will also try to manage the errors.

Therefore, we will try to simulate a typical data-consuming application, where the controllers responding to users’ requests must execute operations as fast as possible, even under a high request load.

The primary objective of this chapter is to guide you through drastically reducing the response time of your application by executing multiple queries concurrently when rendering a dashboard page, and by applying Octane features throughout the application. We will walk through the routing, controller, models, queries, migrations, seeding, and the view template. We will use some mechanisms provided by Octane, such as Octane Routes, chunked data loads, parallel tasks (for queries and HTTP requests), error and exception management, and Octane Cache.

In this chapter, we will cover the following:

  • Installing and setting up the application
  • Importing the initial data (with suggestions on how to do it efficiently)
  • Querying multiple pieces of data from the database in parallel
  • Optimizing the routes
  • More examples of integrating third-party APIs
  • Improving speed with Octane Cache

Technical requirements

We are going to assume that you have PHP 8.0 or greater (8.1 or 8.2) and the Composer tool. If you want to use Laravel Sail (https://laravel.com/docs/9.x/sail), you need the Docker Desktop application (https://www.docker.com/products/docker-desktop).

We will also quickly recap the setup of Octane for our practical example, installing all the tools needed.

The source code and the configuration files of the examples described in the current chapter are available here: https://github.com/PacktPublishing/High-Performance-with-Laravel-Octane/tree/main/octane-ch04

Installing and setting up the dashboard application

To demonstrate the power of Laravel Octane, we are going to build a dashboard page that shows event data filtered in different ways. We will keep it as simple as possible to avoid focusing on business functionality, and instead concentrate on how to apply techniques for improving performance while keeping the application reliable and error-free.

Installing your Laravel application

As shown in Chapter 1, Understanding the Laravel Web Application Architecture, you can install the Laravel application from scratch via the Laravel command, as follows:

composer global require laravel/installer

Once you have your Laravel command installed, you can create your application with the following command:

laravel new octane-ch04

The laravel new command creates the directory with your application, so the next step is to enter the new directory to start customizing the application:

cd octane-ch04

Adding a database

Now that we have created the application, we have to install and set up the database, because our example application needs a database to store and retrieve the example data. To do so, we are going to do the following:

  1. Install a MySQL database server
  2. Execute migrations in Laravel (to apply schema definitions to database tables)
  3. Install an application to manage and check the tables and the data of the database

Installing the database service

There are three ways to install the database server: via the official installer, via your local package manager, or via Docker/Laravel Sail.

The first one is to use the official installer provided by MySQL. You can download and execute the installer from the official website for your specific operating system: https://dev.mysql.com/downloads/installer/.

Once you have downloaded the installer, you can execute it.

Another way is to use your system package manager. If you have macOS, my suggestion is to use Homebrew (see Chapter 1, Understanding the Laravel Web Application Architecture) and execute the following command:

brew install mysql

If you are using GNU/Linux, you can use the package manager provided by your GNU/Linux distribution. For example, for Ubuntu, you can execute the following:

sudo apt install mysql-server

If you don’t want to install the MySQL server on your local operating system, you can run it as a Docker image inside a Docker container. For that, we can use the Laravel Sail tool: a Docker image simplifies the installation of third-party software (such as the database), and Laravel Sail simplifies the process of managing Docker images.

Make sure that Laravel Sail is added to your application. In the project directory, add the Laravel Sail package to your project:

composer require laravel/sail --dev

Then, execute the new command provided by Laravel Sail to add the Sail configuration for Docker:

php artisan sail:install

The execution of the preceding command will ask you to select the services you want to activate via Laravel Sail. For now, the goal is to activate the MySQL service, so select the first option. On selecting the MySQL service, the MySQL Docker image will be downloaded automatically:

Figure 4.1: Installing Laravel Sail

Installing Laravel Sail, as well as downloading the MySQL Docker image, adds the docker-compose.yml file to your project directory and changes the PHPUnit configuration to use the new database instance. In other words, Laravel Sail helps you with the Docker configuration (creating the docker-compose.yml file with a preset configuration based on your answers to the questions raised by the sail:install command) and with the PHPUnit configuration (pointing the test suite to the new database instance).

The docker-compose.yml file will contain the following:

  • The main service to serve your web application
  • An additional service for the MySQL server
  • The right configuration for the services to use the same environment variables from the .env file

If you already have some services up and running on your local operating system and you want to avoid conflicts (multiple services using the same port), you can control some parameters used by the Docker containers defined in docker-compose.yml by setting the following variables in the .env file:

  • VITE_PORT: This is the port used by Vite to serve the frontend part (JavaScript and CSS). The default is 5173; if you have Vite already up and running locally, you could use port 5174 to avoid conflicts.
  • APP_PORT: This is the port used by the web server. By default, the local web server uses port 80, but if you already have a local web server up and running, you can set APP_PORT=8080 in the .env file.
  • FORWARD_DB_PORT: This is the port used by Laravel Sail to expose the MySQL service. By default, the port used by MySQL is 3306, but if it is already in use, you can set the port via FORWARD_DB_PORT=3307.
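For example, a .env file that moves all three services off their default ports could contain the following (the port values here are just illustrative choices):

```ini
# Ports used by the Docker containers managed by Laravel Sail
VITE_PORT=5174
APP_PORT=8080
FORWARD_DB_PORT=3307
```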

Once you are happy with the .env configuration, you can start the Docker containers via Laravel Sail.

To start Laravel Sail and launch the Docker container, use the following command:

./vendor/bin/sail up -d

The -d option allows you to execute Laravel Sail in the background, which is useful if you want to reuse the shell to launch other commands.

To check that your database is up and running, you can execute the php artisan db:show command via sail:

./vendor/bin/sail php artisan db:show

The first time you execute the db:show command, an additional package – the Doctrine Database Abstraction Layer (DBAL) package – will be installed automatically in your Composer dependencies. The Doctrine DBAL package will add database inspection functionalities to the artisan command. Once you run the db:show command, this is what you’ll see:

Figure 4.2: Executing the db:show command via Sail

Now your database is up and running, so you can create your tables. We are going to execute migrations to create the database tables. The database tables will contain your data – for example, the events.

A migration file is a file where you can define the structure of your database table. In the migration file, you can list the columns of your table and define the type of the columns (string, integer, date, time, etc.).

Executing the migration

The Laravel framework provides out-of-the-box migrations for standard functionalities such as user and credential management. That’s why, after installing the framework, you will find migration files already provided with it: the migrations that create a users table, a password resets table, a failed jobs table, and a personal access tokens table.

The migration files are stored in the database/migrations directory.

To execute the migration in the Docker container, you can execute the migrate command via the command line:

./vendor/bin/sail php artisan migrate

This is what you’ll see:

Figure 4.3: Executing migrations

If you are not using Laravel Sail, and you are using a MySQL server installed on your local operating system (with Homebrew, your operating system’s package manager, or the official MySQL installer), you can use php artisan migrate without the sail command:

php artisan migrate

The database schema and the tables are created thanks to the migrations. Now we can install a MySQL client to access the database.

Installing a MySQL client

To access the structure and the data of the database, it is recommended that you install a MySQL client. A MySQL client allows you to inspect the schema and the data, and to execute SQL queries to extract data.

You can choose one of the many tools available for managing MySQL structures and data; some are open source, and others are paid.

If you select Sequel Ace or another tool, you have to set the right connection parameters during the initial connection, taken from the .env file.

For example, the initial connection screen of Sequel Ace asks you for the hostname, the credentials, the database name, and the port:

Figure 4.4: The Sequel Ace login screen

As shown in Figure 4.4, here are the values:

  • Host: 127.0.0.1
  • Username: The DB_USERNAME parameter in the .env file
  • Password: The DB_PASSWORD parameter in the .env file
  • Database: The DB_DATABASE parameter in the .env file
  • Port: The FORWARD_DB_PORT parameter if you are using Laravel Sail, or DB_PORT if you are not using a local Docker container

After installing the MySQL client, we’ll move on to talking about Sail versus the local tools.

Sail versus local tools

We looked at two methods for using PHP, services, and tools: using Docker containers (Laravel Sail) and using a local installation.

Once Sail is set up, if you want to launch commands via Sail, you have to prefix your command with ./vendor/bin/sail. For example, if you want to list the PHP modules that are installed, the following command will list all PHP modules installed on your local operating system:

php -m

If you use the php -m command with the sail tool, as shown in the following, the PHP modules installed in the Docker container will be shown:

./vendor/bin/sail php -m

The Laravel Sail image provides you with the Swoole extension already installed and configured, so now you can add Octane to your application.

Adding Octane to your application

To add Laravel Octane to your application, you have to do the following:

  1. Add the Octane package
  2. Create Octane configuration files

Information

We already covered the Octane setup with Laravel Sail and Swoole in Chapter 3, Using the Swoole Application Server. Let’s quickly recap all the steps for the Octane configuration needed by the example provided to you in the current chapter.

So, first of all, in the project directory, we are going to add the Laravel Octane package with the composer require command:

./vendor/bin/sail composer require laravel/octane

Then, we will create Octane configuration files with the octane:install command:

./vendor/bin/sail php artisan octane:install

Now that we have installed Laravel Octane, we have to configure Laravel to start the Swoole application server.

Activating Swoole as the application server

If you are using Laravel Sail, you have to activate Swoole to serve your Laravel application. The default Laravel Sail configuration launches the classic php artisan serve tool. So, the goal is to edit the configuration file where the artisan serve command is defined and replace it with the octane:start command. To do that, you have to copy the configuration file from the vendor directory to a directory where you can edit it. Laravel Sail provides the sail:publish command to publish the configuration file:

./vendor/bin/sail artisan sail:publish

The publish command generates the docker directory and the supervisord.conf file. The supervisord.conf file is responsible for launching the web service that accepts HTTP requests and generates HTTP responses. With Laravel Sail, the command that runs the web service is defined in the supervisord.conf file. So, to launch Laravel Octane instead of the classic web server, in the docker/8.1/supervisord.conf file (in the project directory), replace the artisan serve command with artisan octane:start and the correct parameters:

# command=/usr/bin/php -d variables_order=EGPCS /var/www/html/artisan serve --host=0.0.0.0 --port=80
command=/usr/bin/php -d variables_order=EGPCS /var/www/html/artisan octane:start --server=swoole --host=0.0.0.0 --port=80

With Laravel Sail, when you change any Docker configuration files, you must rebuild the images:

./vendor/bin/sail build --no-cache

Then, restart Laravel Sail:

./vendor/bin/sail stop
./vendor/bin/sail up -d

If you open your browser to http://127.0.0.1:8080/, you will see your Laravel application served by Swoole.

Verifying your configuration

Once you have set up the tools and services, my suggestion is to become familiar with the configuration they use. The php command has an option to list the installed modules (useful to check whether a module, such as Swoole, is loaded correctly) and an option to show the current PHP configuration.

To check whether a module is installed or not, you can use the PHP command with the -m option:

./vendor/bin/sail php -m

To check whether Swoole is correctly loaded, you can filter just the lines with Swoole as the name (case-insensitive). To filter the lines, you can use the grep command. The grep command shows only the lines that match specific criteria:

./vendor/bin/sail php -m | grep -i swoole

If you want to list all the PHP configurations, you can use the -i option:

./vendor/bin/sail php -i

If you want to change something in your configuration, you might want to see where the configuration (.ini) files are located. To see where the .ini files are located, filter just the ini string:

./vendor/bin/sail php -i | grep ini

You will see something like this:

Configuration File (php.ini) Path => /etc/php/8.1/cli
Loaded Configuration File => /etc/php/8.1/cli/php.ini
Scan this dir for additional .ini files => /etc/php/8.1/cli/conf.d
Additional .ini files parsed => /etc/php/8.1/cli/conf.d/10-mysqlnd.ini,

In the output of the php -i command, you will also see that there is a specific .ini file for Swoole:

/etc/php/8.1/cli/conf.d/25-swoole.ini

If you want to access that file to check it or edit it, you can jump into the running container via the shell command:

./vendor/bin/sail shell

This command opens the shell prompt of the running container, where you can view the content of the file:

less /etc/php/8.1/cli/conf.d/25-swoole.ini

The content of the 25-swoole.ini configuration file is as follows:

extension=swoole.so

If you want to disable Swoole, you can add the ; character at the beginning of the extension directive, as follows:

; extension=swoole.so

With the ; character at the beginning, the extension is not loaded.

Summarizing installation and setup

Before proceeding with implementation, let me summarize the previous steps:

  1. We installed our Laravel application.
  2. We added a database service.
  3. We configured a MySQL client to access the MySQL server.
  4. We added the Octane package and configuration.
  5. We added Swoole as the application server.
  6. We checked the configuration.

So, now we can start using some Octane functionalities, such as executing heavy tasks in a parallel and asynchronous way.

Creating a dashboard application

In an application, you can have multiple kinds of data stored in multiple tables.

Typically, on a product list page, you retrieve the list of products by executing a single query.

Or, in a dashboard, you might show multiple charts or tables presenting data from your database. If you want to show several charts on the same page, you have to perform more than one query on more than one table.

You might execute one query at a time; this means that the total time for retrieving all the useful information for composing the dashboard is the sum of the execution times of all the queries involved.

Running more than one query at the same time would reduce the total time to retrieve all the information.
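As a preview of the approach we will apply later in the chapter, Laravel Octane provides a concurrently() method that runs closures in parallel task workers (it requires the Swoole application server). The queries in this sketch are just placeholders for the dashboard queries we will write:

```php
use App\Models\Event;
use Laravel\Octane\Facades\Octane;

// Each closure runs in its own Swoole task worker, so the total
// wall-clock time is roughly that of the slowest query, not the
// sum of all of them.
[$count, $lastEvents] = Octane::concurrently([
    fn () => Event::count(),
    fn () => Event::orderBy('date', 'desc')->take(5)->get(),
]);
```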

To demonstrate this, we will create an events table where we will store some events with a timestamp for the user.

Creating an events table

When you are creating a table in Laravel, you have to use a migration file. A migration file contains the logic to create the table and all fields. It contains all the instructions to define the structure of your table. To manage the logic for using the data stored in the table, you might need other things such as the model and seeder classes.

The model class allows the developer to access the data and provides some methods for saving, deleting, loading, and querying data.

The seeder class is used to fill the table with initial values or sample values.

To create the model class, the seeder class, and the migration file, you can use the make:model command with the m (create a migration file) and s (create a seeder class) parameters:

php artisan make:model Event -ms

With the make:model command and the m and s parameters, three files are created:

  • The migration file is created in database/migrations/, with a name consisting of a timestamp prefix and the create_events_table suffix, for example, 2022_08_22_210043_create_events_table.php
  • The model class in app/Models/Event.php
  • The seeder class file in database/seeders/EventSeeder.php
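The generated model class starts out as an empty Eloquent model; as a rough sketch, app/Models/Event.php looks like this (what make:model produces, before any customization):

```php
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Event extends Model
{
    use HasFactory;
}
```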

Customizing the migration file

The make:model command creates a template file for creating the table with basic fields such as id and timestamps. The developer must add the fields specific to the application. In the dashboard application, we are going to add these fields:

  • user_id: The foreign key reference to the users table; a user can be related to multiple events
  • type: An event type could be INFO, WARNING, or ALERT
  • description: Text containing the description of the event
  • value: An integer from 1 to 10
  • date: The event date and time

To create the table, an example of the migration file is as follows:

<?php
use App\Models\User;
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
return new class extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('events',
                       function (Blueprint $table) {
            $table->id();
            $table->foreignIdFor(User::class)->index();
            $table->string('type', 30);
            $table->string('description', 250);
            $table->integer('value');
            $table->dateTime('date');
            $table->timestamps();
        });
    }
    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('events');
    }
};

You list the fields you want to add to the table in the up() method. In the code, we are adding the foreign ID for the users table, the type, the description, the value, and the date. The down() method is typically used to drop the table. The up() method is called when the developer wants to execute the migrations, and the down() method is called when the developer wants to roll back the migration.

Seeding data

With the seeder file, you can create the initial data to fill the table. For testing purposes, you can fill the table with fake data. Laravel provides you with a great helper, fake(), for creating fake data.

The fake() helper

For generating fake data, the fake() helper uses the Faker library. The home page of the library is at https://fakerphp.github.io/.

Now, we are going to create fake data for users and events.

To create fake users, you can create the database/seeders/UserSeeder.php file.

In the example, we will do the following:

  • Generate a random name via fake()->firstName()
  • Generate a random email via fake()->email()
  • Generate a random hashed password with Hash::make(fake()->password())

We will generate 1,000 users, so we will use a for loop.

In the run() method of the UserSeeder class, you generate the data and call User::insert():

<?php
namespace Database\Seeders;
use App\Models\User;
use Illuminate\Database\Seeder;
use Illuminate\Support\Facades\Hash;
class UserSeeder extends Seeder
{
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        $data = [];
        $passwordEnc = Hash::make(fake()->password());
        for ($i = 0; $i < 1000; $i++) {
            $data[] =
            [
                'name' => fake()->firstName(),
                'email' => fake()->unique()->email(),
                'password' => $passwordEnc,
            ];
        }
        foreach (array_chunk($data, 100) as $chunk) {
            User::insert($chunk);
        }
    }
}

With the UserSeeder class, we are going to create 1,000 users. Then, once we have the users in the user table, we are going to create 100,000 events:

<?php
namespace Database\Seeders;
use App\Models\Event;
use Illuminate\Database\Seeder;
use Illuminate\Support\Arr;
class EventSeeder extends Seeder
{
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        $data = [];
        for ($i = 0; $i < 100_000; $i++) {
            $data[] = [
                'user_id' => random_int(1, 1000),
                'type' => Arr::random(
                    [
                        'ALERT', 'WARNING', 'INFO',
                    ]
                ),
                'description' => fake()->realText(),
                'value' => random_int(1, 10),
                'date' => fake()->dateTimeThisYear(),
            ];
        }
        foreach (array_chunk($data, 100) as $chunk) {
            Event::insert($chunk);
        }
    }
}

To create fake events, we need to fill the event fields using the fake() helper. The fields filled for the events table are as follows:

  • user_id: We will generate a random number from 1 to 1000
  • type: We will use the Arr::random() helper from Laravel to select one of these values: 'ALERT', 'WARNING', or 'INFO'
  • description: A random text from the fake() helper
  • value: A random integer from 1 to 10
  • date: dateTimeThisYear(), a function provided by the fake() helper that generates a date and time within the current year

As we did for the users table, we are using the chunking approach to improve the speed of the data generation. For large arrays, chunking makes the code more performant: the array is divided into chunks, and each chunk is inserted with a single query instead of inserting every record individually. This reduces the number of insert statements sent to the database.
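To see why chunking reduces the number of insert statements, here is a minimal sketch of what array_chunk() does with the generated rows (plain PHP, with a numeric range standing in for the real records):

```php
<?php
// 1,000 generated rows (stand-ins for the seeded user records).
$rows = range(1, 1000);

// Split them into chunks of 100 rows each.
$chunks = array_chunk($rows, 100);

// 10 chunks => 10 insert() calls instead of 1,000 single-row inserts.
echo count($chunks);        // 10
echo count($chunks[0]);     // 100
```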

Improving the speed of the seed operation

Generating a lot of data requires thinking about the cost in terms of time spent on the operations.

The two most expensive operations used for data seeding (via the UserSeeder class) during the creation of the initial user data are as follows:

  • Hash::make() takes a fraction of a second because it is CPU-intensive. If you repeat this operation many times, it ends up taking whole seconds to execute.
  • array_chunk() can help you reduce the number of calls to the insert() method. The insert() method accepts an array of items (multiple rows to insert), and calling it once with an array is much faster than calling it for every single row. Under the hood (at the database level), each insert() execution has to prepare the transaction for the insert, insert the rows into the table, adjust all the indexes and metadata of the table, and close the transaction. In other words, every insert() call carries some overhead to ensure that the operation is self-consistent, so reducing the number of calls reduces the total overhead time.

So, in order to improve the performance in data creation (seeding), we can make some assumptions and we can implement these approaches:

  • To create multiple users, it is fine to have the same password for all users. We don’t have to implement a sign-in process, we just need a list of users.
  • We can create an array of users, and then use the chunking approach for inserting chunks of data (for 1,000 users we insert 10 chunks of 100 users each).

So, in the previous snippet of code for creating users, we used these two kinds of optimizations: reducing the number of hash calls and using array_chunk.

In some scenarios, you have to insert and load a huge amount of data into the database. In this case, my suggestion is to load data using some specific features provided by the database, instead of trying to optimize your code.

For example, if you have a large amount of data to load, or to transfer from another database, MySQL provides two useful statements.

The first option is using the INTO OUTFILE option:

select * from events INTO OUTFILE '/var/lib/mysql-files/export-events.txt';

Before doing that, you have to make sure that MySQL allows this operation. Because we will export a huge quantity of data into a directory, that directory must be listed as permitted in the MySQL configuration.

In the my.cnf file (the configuration file for MySQL), make sure that there is a secure-file-priv directive. The value of this directive is the directory where you are allowed to export and import files.

If you are using Laravel Sail, secure-file-priv is already set to a directory:

secure-file-priv=/var/lib/mysql-files

In the case of Homebrew, the configuration file is located in the following: /opt/homebrew/etc/my.cnf.

For example, the my.cnf file could have this structure:

[mysqld]
bind-address = 127.0.0.1
mysqlx-bind-address = 127.0.0.1
secure-file-priv = "/Users/roberto"

In this case, the directory for exporting data and files is "/Users/roberto":

Figure 4.5: The secure-file-priv directive of MySQL

This directive exists for security reasons, so evaluate the change before making it. In a production environment, I disable the directive (setting it to an empty string). In a local development environment, this configuration can be acceptable; or, at least, activate the option only when you need it.

After this configuration change, you have to reload the MySQL server. In the case of Homebrew, use the following:

brew services restart mysql

Now you can execute an artisan command (php artisan db) to access the database. You don’t need to specify the database name, username, or password because the command uses the Laravel configuration (the DB_ parameters in .env):

php artisan db

In the MySQL prompt that is shown after you launch the artisan db command, you can, for example, export data using the SELECT syntax:

select * from events INTO OUTFILE '/Users/roberto/export-events.txt';

You will see that exporting thousands and thousands of records will take just a few milliseconds.

If you are using Laravel Sail, as usual, you have to launch php artisan through the sail command:

./vendor/bin/sail php artisan db

In the MySQL Docker prompt use the following:

select * from events INTO OUTFILE '/var/lib/mysql-files/export-events.txt';

If you want to load a file previously exported via a SELECT statement, you can use LOAD DATA:

LOAD DATA INFILE '/Users/roberto/export-events.txt' INTO TABLE events;

Again, you will see that this command will take a few milliseconds to import thousands and thousands of records:

Figure 4.6: With LOAD DATA, you can boost the loading data process

So, in the end, you have more than one way to boost the data loading process. I suggest using LOAD DATA when you have MySQL and can obtain data exported via SELECT. Another scenario is when, as a developer, you receive a huge data file from someone else and you can agree on the file format. Or, if you already know that you will have to load huge amounts of data multiple times for testing purposes, you could create a huge file once (for example, with the fake() helper) and then use that file every time you want to seed the MySQL database.

Executing the migrations

Now, before implementing the query to retrieve data, we have to run the migration and the seeders.

So, in the previous sections, we covered how to create seeders and migration files.

To control which seeders are loaded and executed, list them in the run() method of the database/seeders/DatabaseSeeder.php file, like this:

        $this->call([
            UserSeeder::class,
            EventSeeder::class,
        ]);
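
For context, a complete DatabaseSeeder class (assuming the UserSeeder and EventSeeder classes created in the previous sections live in the same Database\Seeders namespace) could look like this:

```php
<?php

namespace Database\Seeders;

use Illuminate\Database\Seeder;

class DatabaseSeeder extends Seeder
{
    /**
     * Seed the application's database,
     * delegating to the individual seeders.
     */
    public function run()
    {
        $this->call([
            UserSeeder::class,
            EventSeeder::class,
        ]);
    }
}
```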

To create tables and load data with one command, use this:

php artisan migrate --seed

If you have already executed the migrations and you want to recreate the tables from scratch, you can use migrate:refresh:

php artisan migrate:refresh --seed

Or you can use the migrate:fresh command, which drops the tables instead of executing the rollback:

php artisan migrate:fresh --seed

Note

The migrate:refresh command will execute all the down() functions of your migrations. Usually, the down() method calls the dropIfExists() method (for dropping the table), so your table will be dropped and your data will be lost before the table is created again from scratch.
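
As a reminder, the down() method lives in the migration file; a sketch for our events table (using the anonymous migration class style of recent Laravel versions) could look like this:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    // The up() method (creating the table) is omitted for brevity.

    /**
     * Reverse the migration: migrate:refresh calls this method,
     * dropping the events table and all of its data.
     */
    public function down()
    {
        Schema::dropIfExists('events');
    }
};
```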

Now that you have your tables and data created, we will load the data via a query from the controller. Let’s see how.

The routing mechanism

As a practical exercise, we want to build a dashboard that collects some information from our events table. We have to run multiple queries to collect the data needed to render the dashboard Blade view.

In the example, we will do the following:

  • Define two routes for /dashboard and /dashboard-concurrent. The first one is for sequential queries, and the second one is for concurrent queries.
  • Define a controller named DashboardController with two methods – index() (for the sequential queries) and indexConcurrent() (for the concurrent queries).
  • Define four queries: one for counting the rows in the events table, and three queries for retrieving the last five events that include a specific term in the description field (in the example we are looking for the strings that include the term something), for each event type ('INFO', 'WARNING', and 'ALERT').
  • Define a view to show the result of the queries.

Using the Octane routes

Octane provides an implementation of a routing mechanism.

The routing mechanism provided by Octane (Octane::route()) is lighter than the classic Laravel routing mechanism (Route::get()). The Octane routing mechanism is faster because it skips some of the features provided by Laravel routes, such as middleware. Middleware is a way of adding functionality when a route is invoked, but calling and managing this software layer takes time.

To define Octane routes, you can use the Octane::route() method, which takes three parameters. The first parameter is the HTTP method (for example, 'GET' or 'POST'), the second is the path (such as '/dashboard'), and the third is a function that returns a Response object.

Now that we understand the syntax differences between Route::get() and Octane::route(), we can modify the last code snippet by replacing Route::get() with Octane::route():

use Laravel\Octane\Facades\Octane;
use Illuminate\Http\Response;
use App\Http\Controllers\DashboardController;
Octane::route('GET', '/dashboard', function() {
    return new Response(
      (new DashboardController)->index());
});
Octane::route('GET', '/dashboard-concurrent', function() {
    return new Response(
      (new DashboardController)->indexConcurrent());
});

If you want to test how much faster the Octane routing mechanism is than the Laravel one, create two routes: the first served by Octane, and the second served by the Laravel router. The response is very fast because the application inherits all the benefits of the Octane framework loading mechanism, and Octane::route() also optimizes the routing part. The following code creates two routes, /a and /b. The /a route is managed by the Octane routing mechanism, and the /b route by the classic routing mechanism:

Octane::route('GET', '/a', function () {
    return new Response(view('welcome'));
});
Route::get('/b', function () {
    return new Response(view('welcome'));
});

If you compare the two requests by calling them via the browser and checking the response times, you will see that the /a route is faster than the /b route (on my local machine, it is 50% faster) because of Octane::route().

Now that the routes are set up, we can focus on the controller.

Creating the controller

Now we are going to create a controller, DashboardController, with two methods: index() and indexConcurrent().

In the app/Http/Controllers/ directory, create a DashboardController.php file with the following content:

<?php
namespace App\Http\Controllers;
class DashboardController extends Controller
{
    public function index()
    {
        return view('welcome');
    }
    public function indexConcurrent()
    {
        return view('welcome');
    }
}

For now, the controller’s methods just load the view. Next, we are going to add some logic to the methods, creating a query in the model file and calling it from the controller.

Creating the query

To allow the controller to load data, we are going to implement the query, the logic that retrieves data from the events table. To do that, we are going to use the query scope mechanism provided by Laravel. The query scope allows you to define the logic in the model and reuse it in your application.

The query scope we are going to implement will be placed in the scopeOfType() method in the Event model class. The scopeOfType() method allows you to extend the functionalities of the Event model and add a new method, ofType():

<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class Event extends Model
{
    use HasFactory;
    /**
     * This is a simulation of a
     * complex query that is time-consuming
     *
     * @param  mixed  $query
     * @param  string  $type
     * @return mixed
     */
    public function scopeOfType($query, $type)
    {
        sleep(1);
        return $query->where('type', $type)
        ->where('description', 'LIKE', '%something%')
        ->orderBy('date')->limit(5);
    }
}

The Event model file is located in the app/Models directory. The file is Event.php.

The query filters the rows by the event type passed as an argument ($type) and selects the rows where the description contains the word something (via the 'LIKE' operator).

In the end, we sort the data by date (orderBy) and limit the result to five records (limit).

To highlight the benefits of the optimizations we are going to implement, I have added a 1-second sleep() call to simulate a time-consuming operation.

The DashboardController file

Now we can open the DashboardController file again and implement the logic to call the four queries. The first one counts the events:

Event::count();

The second one retrieves the events of the 'INFO' type via the ofType() scope defined earlier:

Event::ofType('INFO')->get();

The third one retrieves the 'WARNING' events:

Event::ofType('WARNING')->get();

The last one retrieves the 'ALERT' events:

Event::ofType('ALERT')->get();

Let’s put it all together in the controller index() method to call the queries sequentially:

use App\Models\Event;
// …
public function index()
{
    $time = hrtime(true);
    $count = Event::count();
    $eventsInfo = Event::ofType('INFO')->get();
    $eventsWarning = Event::ofType('WARNING')->get();
    $eventsAlert = Event::ofType('ALERT')->get();
    $time = (hrtime(true) - $time) / 1_000_000;
    return view('dashboard.index',
        compact('count', 'eventsInfo', 'eventsWarning',
                'eventsAlert', 'time')
    );
}

The hrtime() function is used to measure the execution time of the four queries.
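
As a quick aside, hrtime(true) returns a high-resolution timestamp in nanoseconds, which is why the code divides the difference by 1,000,000 to obtain milliseconds:

```php
<?php

// hrtime(true) returns a single integer in nanoseconds
// (instead of a [seconds, nanoseconds] array).
$start = hrtime(true);
usleep(50_000); // simulate roughly 50 ms of work
$elapsedMs = (hrtime(true) - $start) / 1_000_000;
echo $elapsedMs; // roughly 50 (milliseconds)
```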

Then, after all the queries are executed, the dashboard.index view is called.

Now, in the same way, we will create the indexConcurrent() method, where the queries are executed in parallel via the Octane::concurrently() method.

The Octane::concurrently() method has two parameters. The first is an array of tasks, where each task is an anonymous function that can return a value; concurrently() returns an array containing the value returned by each task. The second parameter is the number of milliseconds that concurrently() waits for the tasks to complete. If a task takes longer than that, concurrently() raises a TaskTimeoutException.
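
To make the signature concrete, here is a minimal sketch of the second parameter in action (the 2,000-millisecond timeout and the trivial tasks are just illustrative choices):

```php
<?php

use Laravel\Octane\Facades\Octane;
use Laravel\Octane\Exceptions\TaskTimeoutException;

try {
    // Run two tasks in parallel; wait at most 2,000 ms for both
    [$sum, $product] = Octane::concurrently([
        fn () => 2 + 2,
        fn () => 2 * 3,
    ], 2000);
} catch (TaskTimeoutException $e) {
    // Raised when a task exceeds the 2,000 ms limit
    report($e);
}
```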

The implementation of the indexConcurrent() method is located in the DashboardController class:

    public function indexConcurrent()
    {
        $time = hrtime(true);
        try {
            [$count,$eventsInfo,$eventsWarning,$eventsAlert] =
            Octane::concurrently([
                fn () => Event::count(),
                fn () => Event::ofType('INFO')->get(),
                fn () => Event::ofType('WARNING')->get(),
                fn () => Event::ofType('ALERT')->get(),
            ]);
        } catch (TaskTimeoutException $e) {
            return "Error: " . $e->getMessage();
        }
        $time = (hrtime(true) - $time) / 1_000_000;
        return view('dashboard.index',
            compact('count', 'eventsInfo', 'eventsWarning',
                    'eventsAlert', 'time')
        );
    }

To use TaskTimeoutException correctly, you have to import the class:

use Laravel\Octane\Exceptions\TaskTimeoutException;

The last thing you have to implement to render the pages is the view.

Rendering the view

In the controller, the last instruction of each method is returning the view:

return view('dashboard.index',
            compact('count', 'eventsInfo', 'eventsWarning',
                    'eventsAlert', 'time')
        );

The view() function loads the resources/views/dashboard/index.blade.php file (dashboard.index). To share data from the controller to the view, we are going to send some arguments to the view() function, such as $count, $eventsInfo, $eventsWarning, $eventsAlert, and $time.

The view is an HTML template that uses Blade syntax to show variables such as $count, $eventsInfo, $eventsWarning, $eventsAlert, and $time:

<x-layout>
    <div>
        Count : {{ $count }}
    </div>
    <div>
        Time : {{ $time }} milliseconds
    </div>
    @foreach ($eventsInfo as $e)
    <div>
        {{ $e->type }} ({{ $e->date }}): {{ $e->description }}
    </div>
    @endforeach
    @foreach ($eventsWarning as $e)
    <div>
        {{ $e->type }} ({{ $e->date }}): {{ $e->description }}
    </div>
    @endforeach
    @foreach ($eventsAlert as $e)
    <div>
        {{ $e->type }} ({{ $e->date }}): {{ $e->description }}
    </div>
    @endforeach
</x-layout>

The view inherits the layout (via the x-layout component tag), so you can create the resources/views/components/layout.blade.php file:

<html>
    <head>
        <title>{{ $title ?? 'Laravel Octane Example' }}
        </title>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width,
          initial-scale=1.0">
    </head>
    <body>
        <h1>Laravel Octane Example</h1>
        <hr/>
        {{ $slot }}
    </body>
</html>

Now you have the data in your database, the query in the model class, the controller that loads the data via the model and sends it to the view, and the view template file.

We also have two routes: the first one is /dashboard with sequential queries, and the second one is /dashboard-concurrent with parallel queries.

Just for this example, the query is forced to take 1 second (in the model method).

If you open your browser at http://127.0.0.1:8000/dashboard, you will see that each request takes more than 3 seconds (each of the three ofType() queries takes 1 second because of the sleep() call). This is the sum of the execution times of all the queries.

If you open your browser at http://127.0.0.1:8000/dashboard-concurrent, you will see that each request takes about 1 second. This is the execution time of the slowest (most expensive) query.

This means that when you have to call multiple queries in your controller to retrieve the data needed to render the page, you can use the Octane::concurrently() method.

The Octane::concurrently() method is also great in other scenarios (not just loading data from a database), such as making concurrent HTTP requests. So, in the next section, we are going to use the Octane::concurrently() method to retrieve data from HTTP calls (instead of retrieving data from a database). Let’s see how.

Making parallel HTTP requests

Think about a scenario in which you have to add a new web page to your application and, to render it, you have to call more than one API because you need multiple pieces of data from multiple sources (a list of products, a list of news, a list of links, and so on) for the same page. In this scenario, you can perform the HTTP requests concurrently to reduce the response time of the page.

For this example, to simplify the explanation, we will avoid using a controller and a view. We are going to collect JSON responses from the APIs and then merge them into one JSON response. The important aspect to focus on is the mechanism of making HTTP requests to third-party services, because our goal is to understand how to execute the HTTP calls concurrently.

To simulate the HTTP service, we are going to create two new routes:

  • api/sentence: An API endpoint that replies with a JSON with a random sentence
  • api/name: An API endpoint that replies with a JSON with a random first name

Both API endpoints call the sleep() function for 1 second to force the client (that calls the endpoint) to wait for the answer. This is a way to simulate a slow API and observe the benefit we can obtain from parallel HTTP requests.

In the routes/web.php file, you can add the two routes that implement the APIs:

Octane::route('GET', '/api/sentence', function () {
    sleep(1);
    return response()->json([
        'text' => fake()->sentence()
    ]);
});
Octane::route('GET', '/api/name', function () {
    sleep(1);
    return response()->json([
        'name' => fake()->name()
    ]);
});

Now, using the Http::get() method to perform HTTP requests, you can implement the logic to retrieve data from two APIs sequentially:

Octane::route('GET', '/httpcall/sequence', function () {
    $time = hrtime(true);
    $sentenceJson =
      Http::get('http://127.0.0.1:8000/api/sentence')->
      json();
    $nameJson =
      Http::get('http://127.0.0.1:8000/api/name')->json();
    $time = hrtime(true) - $time;
    return response()->json(
        array_merge(
            $sentenceJson,
            $nameJson,
            ["time_ms" => $time / 1_000_000]
        )
        );
});

Using Octane::concurrently(), you can now call the two Http::get() methods, wrapping each HTTP request in a closure (anonymous function), as we did for the database queries:

Octane::route('GET', '/httpcall/parallel', function () {
    $time = hrtime(true);
    [$sentenceJson, $nameJson] = Octane::concurrently([
        fn() =>
          Http::get('http://127.0.0.1:8000/api/sentence')->
          json(),
        fn() =>
          Http::get('http://127.0.0.1:8000/api/name')->
          json()
    ]
    );
    $time = hrtime(true) - $time;
    return response()->json(
        array_merge(
            $sentenceJson,
            $nameJson,
            ["time_ms" => $time / 1_000_000]
        )
        );
});

If you open your browser to http://127.0.0.1:8000/httpcall/sequence, you will see that the response time is more than 2,000 milliseconds (the sum of the execution times of the two sleep() calls, plus some milliseconds for establishing the HTTP connections).

If you open your browser to http://127.0.0.1:8000/httpcall/parallel, you will see that the response takes a little more than 1,000 milliseconds (the two HTTP requests are performed in parallel).

As these examples show, using Octane::concurrently() can help you reduce the total response time, whether you are running database queries or fetching external resources.

Managing HTTP errors

While executing HTTP calls in parallel, you have to expect that, sometimes, an external service will answer with an error (for example, an HTTP 500 status code). For better error management, we must also properly deal with the case where we get an empty response from the API, which typically happens when the response contains an error (for example, the API returns a 500 status code).

To demonstrate this, we are going to implement an API route that returns a 500 HTTP status code (an internal server error):

Octane::route('GET', '/api/error', function () {
    return response(
        status: 500
    );
});

Then, we can call the API error route in one of our concurrent HTTP calls. If we do not manage the error, we will receive an error such as this one:

Figure 4.7: The unmanaged error in the browser

So, we can improve our code by managing the following:

  • The exception that could be raised by the execution of the concurrent HTTP calls
  • An empty response value, via the null coalescing operator
  • The initialization of the result arrays as empty arrays

In the routes/web.php file, we can improve the API calls and make them more reliable:

Route::get('/httpcall/parallel-witherror', function () {
    $time = hrtime(true);
    $sentenceJson = [];
    $nameJson = [];
    try {
        [$sentenceJson, $nameJson] = Octane::concurrently([
            fn () => Http::get(
              'http://127.0.0.1:8000/api/sentence')->json()
              ?? [],
            fn () => Http::get(
              'http://127.0.0.1:8000/api/error')->json() ??
              [],
        ]
        );
    } catch (Exception $e) {
        // The error: $e->getMessage();
    }
    $time = hrtime(true) - $time;
    return response()->json(
        array_merge(
            $sentenceJson,
            $nameJson,
            ['time_ms' => $time / 1_000_000]
        )
    );
});

In this way, if an exception is raised or we receive an HTTP error as a response, our software will manage these scenarios.

The suggestion is that even if you are focusing on performance aspects, you must not lose sight of the behavior of the application and of managing the unhappy paths correctly.

Now that we understand how to execute tasks in parallel, we can focus on caching the response to avoid calling external resources (database or web service) for every request.

Understanding the caching mechanism

Laravel provides the developer with a strong mechanism for caching.

The caching mechanism can be used with a provider chosen from the database, Memcached, Redis, or DynamoDB.

Laravel’s caching mechanism allows data to be stored for later retrieval quickly and efficiently.

This is very useful in cases where retrieving data from an external resource such as a database or a web service can be a time-consuming operation. After the information is retrieved, it can be stored in the cache to make future retrievals easier and faster.

So, basically, a caching mechanism exposes two basic functionalities: storing information in the cache and retrieving information from the cache.

To retrieve information properly, each cached item is identified by a storage key. This way, it is possible to cache many pieces of information, each identified by a specific key.

Laravel’s caching mechanism, through the special remember() function, allows you to retrieve a piece of information tied to a specific key. If this information has become stale because the storage time-to-live has been exceeded, or if the key is not in the cache, then remember() calls an anonymous function whose task is to fetch the data from the external resource, which can be a database or a web service. Once the original data is retrieved, remember() returns the data and, at the same time, takes care of caching it under the user-defined key.

Here is an example of using the remember() function:

use Illuminate\Support\Facades\Cache;
$secondsTimeToLive = 5;
$cacheKey = 'cache-key';
$value = Cache::remember($cacheKey, $secondsTimeToLive, function () {
    return Http::get('http://127.0.0.1:8000/api/sentence')
      ->json() ?? [];
});

The remember() functionality applied to each HTTP request in the previous example can be implemented in an anonymous function:

$getHttpCached = function ($url) {
        $data = Cache::store('octane')->remember(
                'key-'.$url, 20, function () use ($url) {
            return Http::get(
              'http://127.0.0.1:8000/api/'.$url)->json() ??
              [];
        });
        return $data;
    };

The anonymous function can then be invoked by the Octane::concurrently() function for each concurrent task:

[$sentenceJson, $nameJson] = Octane::concurrently([
            fn () => $getHttpCached('sentence'),
            fn () => $getHttpCached('name'),
        ]
        );

So, the final code in a route in the routes/web.php file is as follows:

Octane::route('GET', '/httpcall/parallel-caching', function () {
    $getHttpCached = function ($url) {
        $data = Cache::store('octane')->remember(
                'key-'.$url, 20, function () use ($url) {
            return Http::get(
              'http://127.0.0.1:8000/api/'.$url)->json() ??
              [];
        });
        return $data;
    };
    $time = hrtime(true);
    $sentenceJson = [];
    $nameJson = [];
    try {
        [$sentenceJson, $nameJson] = Octane::concurrently([
            fn () => $getHttpCached('sentence'),
            fn () => $getHttpCached('name'),
        ]
        );
    } catch (Exception $e) {
        // The error: $e->getMessage();
    }
    $time = hrtime(true) - $time;
    return response()->json(
        array_merge(
            $sentenceJson,
            $nameJson,
            ['time_ms' => $time / 1_000_000]
        )
    );
});

The following are some considerations about the code:

  • We used the Octane route (faster than Laravel routes).
  • The $url parameter of the anonymous function is used to create the cache key and to call the right API via Http::get().
  • We used the cache with Octane as the driver, Cache::store('octane').
  • We used the remember() function for the cache.
  • We set the time-to-live of the cache item to 20 seconds. This means that after 20 seconds, the cache item expires, and the code provided by the anonymous function will be called again.

This code improves the response time dramatically thanks to the cache.

However, the code could be more optimized.

We cache the result of each HTTP request. However, we could instead cache the combined result returned by Octane::concurrently(). This allows us to save even more time by skipping the execution of Octane::concurrently() entirely when the value is cached.

In this case, we can move Octane::concurrently() into the body of the anonymous function called by remember():

Octane::route('GET', '/httpcall/caching', function () {
    $time = hrtime(true);
    $sentenceJson = [];
    $nameJson = [];
    try {
        [$sentenceJson, $nameJson] =
        Cache::store('octane')->remember('key-checking',
                                          20, function () {
            return Octane::concurrently([
                fn () => Http::get(
                  'http://127.0.0.1:8000/api/sentence')->
                  json(),
                fn () => Http::get(
                 'http://127.0.0.1:8000/api/name')->json(),
            ]);
        });
    } catch (Exception $e) {
        // The error: $e->getMessage();
    }
    $time = hrtime(true) - $time;
    return response()->json(
        array_merge(
            $sentenceJson,
            $nameJson,
            ['time_ms' => $time / 1_000_000]
        )
    );
});

In this case, from the log of the requests, you can see that the APIs are only called the first time, then the data is retrieved from the cache, and the execution time is reduced:

  200    GET /api/sentence ........ 18.57 mb 17.36 ms
  200    GET /api/name ............ 18.57 mb 17.36 ms
  200    GET /httpcall/caching .... 17.43 mb 59.82 ms
  200    GET /httpcall/caching ..... 17.64 mb 3.38 ms
  200    GET /httpcall/caching ..... 17.64 mb 2.36 ms
  200    GET /httpcall/caching ..... 17.64 mb 3.80 ms
  200    GET /httpcall/caching ..... 17.64 mb 3.30 ms

The first call to the caching route takes around 60 milliseconds; the subsequent requests are much faster (around 3 milliseconds).

If you try the same test by calling the HTTP requests sequentially and without the cache, you will see higher response times. You will also see that the APIs are called every time, which makes the speed and reliability of your application dependent on the third-party system that provides them.

For example, by calling HTTP requests sequentially, with no cache – even if the APIs are provided by Octane (so in a faster way) – you will obtain the following:

  200    GET /api/sentence ........ 18.57 mb 15.22 ms
  200    GET /api/name ............. 18.68 mb 0.64 ms
  200    GET /httpcall/sequence ... 18.79 mb 60.81 ms
  200    GET /api/sentence ......... 18.69 mb 3.26 ms
  200    GET /api/name ............. 18.69 mb 1.68 ms
  200    GET /httpcall/sequence ... 18.94 mb 15.55 ms
  200    GET /api/sentence ......... 18.70 mb 1.30 ms
  200    GET /api/name ............. 18.70 mb 1.09 ms
  200    GET /httpcall/sequence .... 18.97 mb 9.52 ms
  200    GET /api/sentence ......... 18.71 mb 1.32 ms
  200    GET /api/name ............. 18.71 mb 1.05 ms
  200    GET /httpcall/sequence .... 19.00 mb 9.28 ms

While you might think this is not a great improvement, or that these values are machine-dependent, a small improvement for a single request (our response time has gone from 10-15 milliseconds to 2-3 milliseconds) can have a big impact, especially if, in a production environment, you have a huge number of simultaneous requests. The benefit of each small per-request improvement is multiplied by the number of requests you might have in production with many concurrent users.

Now that we understand a bit more about caching, we could refactor our dashboard by adding the cache for event retrieval.

Refactoring the dashboard

We are going to create a new route, /dashboard-concurrent-cached, with the Octane route and we are going to call a new DashboardController method, indexConcurrentCached():

// Importing the Octane class
use Laravel\Octane\Facades\Octane;
// Importing the Response class
use Illuminate\Http\Response;
// Importing the DashboardController class
use App\Http\Controllers\DashboardController;
Octane::route('GET', '/dashboard-concurrent-cached', function () {
    return new Response((new DashboardController)->
     indexConcurrentCached());
});

In the controller app/Http/Controllers/DashboardController.php file, you can add the new method:

public function indexConcurrentCached()
{
    $time = hrtime(true);
    try {
        [$count,$eventsInfo,$eventsWarning,$eventsAlert] =
        Cache::store('octane')->remember(
            key: 'key-event-cache',
            ttl: 20,
            callback: function () {
                return Octane::concurrently([
                    fn () => Event::count(),
                    fn () => Event::ofType('INFO')->get(),
                    fn () => Event::ofType('WARNING')->
                             get(),
                    fn () => Event::ofType('ALERT')->get(),
                ]);
            }
        );
    } catch (Exception $e) {
        return 'Error: '.$e->getMessage();
    }
    $time = (hrtime(true) - $time) / 1_000_000;
    return view('dashboard.index',
        compact('count', 'eventsInfo', 'eventsWarning',
                'eventsAlert', 'time')
    );
}

In the new method, we do the following:

  • Call the remember() method to store the values in the cache
  • Execute Octane::concurrently() to parallelize the queries
  • Use 'key-event-cache' as the key name for the cache item
  • Use 20 seconds as the cache time-to-live (after 20 seconds, the queries will be executed again to retrieve fresh values from the database)
  • Use the same queries as the /dashboard route and the same Blade view (to make a fair comparison)

Now, you can restart your Octane worker with php artisan octane:reload if you are not using the automatic reloader (as explained in Chapter 2, Configuring the RoadRunner Application Server), and then access the following:

  • http://127.0.0.1:8000/dashboard to load the page with sequential queries and without a caching mechanism
  • http://127.0.0.1:8000/dashboard-concurrent-cached to load the page with parallel queries and with a caching mechanism

Now that we have implemented the logic and opened the pages, we are going to analyze the result.

The result

The result is impressive: from a response that took more than 200 milliseconds, you now have a response that takes 3 or 4 milliseconds.

The longest responses are from /dashboard, where sequential queries are implemented without a cache. The fastest responses come from /dashboard-concurrent-cached, which uses Octane::concurrently() to execute the queries and caches the result for 20 seconds:

  200    GET /dashboard ...... 19.15 mb 261.34 ms
  200    GET /dashboard ...... 19.36 mb 218.45 ms
  200    GET /dashboard ...... 19.36 mb 223.23 ms
  200    GET /dashboard ...... 19.36 mb 222.72 ms
  200    GET /dashboard-concurrent-cached .............................. 19.80 mb 112.64 ms
  200    GET /dashboard-concurrent-cached ................................ 19.81 mb 3.93 ms
  200    GET /dashboard-concurrent-cached ................................ 19.81 mb 3.69 ms
  200    GET /dashboard-concurrent-cached ................................ 19.81 mb 4.28 ms
  200    GET /dashboard-concurrent-cached ................................ 19.81 mb 4.62 ms

When you are caching data in Octane Cache, you should also be aware of the cache configuration. A wrong configuration could raise some errors in your application.

The cache configuration

A typical exception that you might see when you start to use Octane Cache in a real scenario is something like this:

Value [a:4:{i:0;i:100000;i:...] is too large for [value] column

The solution to this error is to change the cache configuration by increasing the number of bytes allocated for storing each cache value. In the config/octane.php file, you can configure the number of rows and the number of bytes allocated for the cache.

By default, the configuration is as follows:

    'cache' => [
        'rows' => 1000,
        'bytes' => 10000,
    ],

If you get the Value is too large exception in your browser, you might have to increase the number of bytes in the config/octane.php file:

    'cache' => [
        'rows' => 1000,
        'bytes' => 100000,
    ],

So now, using Octane features, you can improve the response time and some aspects of your application.

Summary

In this chapter, we built a very simple application that allowed us to cover multiple aspects of building a Laravel application, such as importing the initial data, optimizing the routing mechanism, integrating third-party data via HTTP requests, and using a cache mechanism via Octane Cache. We also used some Laravel Octane features in order to reduce the page loading response time thanks to the following:

  • Octane::route for optimizing the routing resolution process
  • Octane::concurrently for optimizing and starting parallel tasks
  • Octane Cache for adding a cache based on Swoole to our application

We learned how to execute queries and API calls concurrently and use the cache mechanism for reusing the content across the requests.

In the next chapter, we will take a look at some other aspects of performance that are not strictly provided by Octane but can affect your Octane optimization process.

We will also apply a different strategy for caching using the scheduled tasks provided by Octane and other optimizations.
