The app was built using Laravel, which, out of the box, makes displaying localised content a breeze.
Simply define the translations in either JSON or PHP key => value syntax and wrap the content in one of the localisation helper functions. Then set the user’s locale in configuration and, hey presto, content is served in the desired locale.
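For illustration, a minimal example might look like this (the file, key, and text below are made up for the sake of the example). A PHP language file:

// resources/lang/en/messages.php
return [
    'welcome' => 'Welcome to the app',
];

Then, in a Blade view:

{{ __('messages.welcome') }}

And switching the locale at runtime is a one-liner:

App::setLocale('fr');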
However, what happens when the translations need to be updated on the fly?
Laravel’s ‘out-of-the-box solution’ is great in a project that lives in git and where content is provided by the client and updated manually by the developer.
In this scenario, the language files get updated, the changes are committed and deployed, and the new content is live.
However, once you get into the realms of exposing an interface that updates the language files in real time, any changes made are no longer tracked in git.
If the server were to fail catastrophically, all of the changes made since the last push would be lost forever and that wouldn’t make us very popular.
Moreover, what if the application were deployed to multiple servers?
Only the server that happened to win the lottery, so to speak, and served the request when the user clicked save would have the changes reflected.
In that scenario, different content would be served between requests. Again, no popularity contests would be won if that solution were to make it into production.
A Single Source of Truth
One solution to this issue is to store the translations in a central location that becomes the single source of truth.
Usually, it makes sense for this location to be the database. However, that could result in a lot of expensive database queries if every translation had to be looked up on page load; caching would likely be needed to reduce that overhead.
Perhaps the bigger issue is that Laravel does not support database-driven localisation out of the box, so a whole lot of work would be needed to make it possible.
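Just to illustrate the overhead, a database-backed lookup would almost certainly need to sit behind Laravel’s cache to be viable. A rough sketch, assuming a hypothetical translations table with locale, key and value columns:

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

// Cache the lookup so every page load doesn't hit the database.
$value = Cache::remember("translations.{$locale}.{$key}", 3600, function () use ($locale, $key) {
    return DB::table('translations')
        ->where('locale', $locale)
        ->where('key', $key)
        ->value('value');
});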
Another option would be to use a central file store. This is how I decided to tackle the problem, leveraging Amazon S3 and the fantastic suite of AWS CLI tools in the process.
Solving the Problem
I started by building an interface that allowed the administrator to update the content of the language files. Below are the steps I wanted to happen when the save button was clicked (a rough sketch of the save handler follows the list).
- Administrator clicks save.
- Language files on the server handling the request are updated.
- Updated files are synchronised to Amazon S3.
- A scheduled task runs on the application servers looking for changes in S3.
- Changes are synchronised to the additional application servers.
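For the second step, the save handler just needs to write the submitted key => value pairs back to the relevant PHP language file. A rough sketch of how that might look (the file path and input names here are illustrative, not the actual implementation):

use Illuminate\Http\Request;
use Illuminate\Support\Facades\File;

public function save(Request $request)
{
    // e.g. ['welcome' => 'Welcome to the app', ...]
    $translations = $request->input('translations');

    // Rewrite the language file on this server with the updated values.
    $contents = "<?php\n\nreturn " . var_export($translations, true) . ";\n";
    File::put(resource_path('lang/en/messages.php'), $contents);

    // Steps three to five - syncing to S3 and out to the other servers - are covered below.
}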
Amazon’s S3 was the obvious choice for storing the language files and is blessed with the added bonus of redundancy. Win! On top of that, Amazon provide an excellent suite of tools which makes interactions between EC2 instances and S3 an absolute breeze. Enter the AWS CLI tools.
Reading through the documentation, I came across the sync command. It is an extremely flexible utility: supply it with a local path and a remote S3 path and it will sync one way; swap the parameters around and it will do the opposite. This, coupled with the seemingly endless list of options, means the sky is the limit.
To get started, I installed the CLI tool on all of the application servers:
$ pip install awscli --upgrade --user
Simple.
Note that if you are following along, you will need to think about permissions, but given the plethora of ways to skin that particular cat, I’ll leave that up to you.
Next came syncing the content from the application to S3. The sync command is what I needed, with the addition of the --delete option: if any files were deleted from the application, they would also be deleted from S3 during the synchronisation.
In PHP, it looks like this:
$command = "/.local/bin/aws s3 sync /path/to/language/files s3://remote-bucket-name --delete";
exec($command, $out);
Great, I now had the files syncing one way.
In a single-server setup, you could probably stop here, safe in the knowledge that your language files are backed up.
I was working on a multi-server setup, so I needed to think about how to push the changes out to the other instances.
This turned out to be simple. All I had to do was set up a cron job on each of the instances that ran the same command as the sync to S3, but the opposite way around.
*/1 * * * * /usr/bin/aws s3 sync s3://remote-bucket-name /path/to/language/files --delete
This command runs every minute and synchronises everything from the S3 bucket to the application server, whilst deleting any files that exist in the destination but not in the source.
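As an aside, if you would rather keep this inside the application than in a raw crontab, the same pull-down sync could be expressed with Laravel’s scheduler (a sketch only; it still relies on the standard schedule:run cron entry being in place):

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    $schedule->exec('/usr/bin/aws s3 sync s3://remote-bucket-name /path/to/language/files --delete')
             ->everyMinute();
}

Either way, the end result is the same: every instance pulls the latest language files down within a minute of them changing.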
Wrapping Up
I am aware that this solution is open to race conditions: if several people were to update the content at the same time, there is a good chance that some of those changes would never be reflected.
However, in this particular scenario, the content will be updated rarely and by a single person. With the time allocated to the task, this was a really efficient and neat solution.