Simple way to check if cron is working!

Recently, we moved our app from Amazon EC2 instances to the Elastic Beanstalk service. Those who have had a go at hosting their apps on Elastic Beanstalk will know the struggles with .ebextensions. There is far more documentation, and there are far more examples, available now than when the service first hit the market. One of the most challenging things to do in Beanstalk is running cron. The recommended way of doing it is to create a worker environment that processes long-running workloads on demand or performs tasks on a schedule (sketched below). It makes use of other Amazon services, such as SQS (Simple Queue Service).
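For reference, scheduled tasks in a worker environment are declared in a cron.yaml file in the root of your source bundle; Beanstalk then POSTs to your app on that schedule via an SQS queue. Here is a minimal sketch only; the task name and URL are placeholders of ours, not anything canonical:

version: 1
cron:
  - name: "nightly-cleanup"        # placeholder task name
    url: "/tasks/nightly-cleanup"  # placeholder endpoint in your app
    schedule: "0 0 * * *"          # standard cron syntax: midnight, daily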

That worker approach is fine for people who have genuinely long-running processes, but our PHP application was pretty straightforward. Most of our cron jobs complete in seconds. For us, running a separate worker instance was pointless and a waste of resources, so we needed to come up with another way. The other way of running cron is the “leader_only” declaration in your .ebextensions config files. By using that, you are asking Beanstalk to run the command on only one instance (the leader) during a deployment. This avoids multiple instances running the same cron jobs, because that would really screw things up.
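To make that concrete, here is a minimal sketch of such a config file. The file name and the crontab file are placeholders of ours; adapt them to your project:

# .ebextensions/cron.config (placeholder name)
container_commands:
  01_install_crontab:
    # Install our crontab file, but only on the leader instance
    command: "crontab .ebextensions/mycrontab.txt"
    leader_only: true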

Anyway, you need to get the YAML right, and there is a neat little utility online that lets you validate your files. It's called YAMLlint.com. Check it out. To see if your cron is running as it should, here is a neat little way to test it. Obviously, part of the point of using Beanstalk is to avoid manually editing code on a single instance, which takes away the need to set up FTP or SFTP. I suggest you do set it up for your first instance anyway, so you can check whether the cron is running and everything is deploying as it should. There is no harm in double-checking, right? 😉

1. Edit your crontab, or in Beanstalk's case, add it to the .ebextensions folder.

$ crontab -e

2. Add the following line to your crontab. It appends the current date to a log file every minute. The six fields of a crontab entry are the minute, hour, day of the month, month, day of the week, and the command to run.

* * * * * /bin/date >> /tmp/cron_output

3. Exit the editor, and you should see output similar to this.

crontab: installing new crontab

4. Check that cron is running every minute as it should by tailing the output file it writes to the /tmp directory:

tail -f /tmp/cron_output

You should see output similar to this:

Mon Apr 23 00:01:01 PDT 2018
Mon Apr 23 00:02:01 PDT 2018
Mon Apr 23 00:03:01 PDT 2018
Mon Apr 23 00:04:01 PDT 2018
...

If you don't see a new line appear every minute, you know something is not right with the cron service. If you don't see the file at all, you know the cron job never ran. This narrows things down to what is working and what isn't, and saves you hours of figuring out whether it's your Elastic Beanstalk config playing up, your actual cron commands, or something else. Hope this helps 🙂
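If the file never appears, it is also worth checking that the cron daemon itself is running. The service name varies by distro, so one of these should apply on your instance:

sudo service crond status     # Amazon Linux / CentOS
sudo systemctl status cron    # Ubuntu / Debian (systemd-based)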

I will write up the way we set up cron in a future blog post. Watch this space!

My love affair with Docker

The last few days have been the worst for our business, and part of that has to do with a much-hated hosting provider: OVH. Some devs like it, and some don't! If you read the reviews about this host, you'd probably find more bad things said about them and their network than good ones. The only reason we prefer OVH over AWS (which we do use for most of our production apps) has a lot to do with their no-questions-asked policy for IPv4 addresses.

I am sure most of you know we have a shortage of IPv4 addresses. It's been in the news, and most of the people who heard about it probably had absolutely no clue what was going on in the computer world. Anyway, I won't go into explaining that for the newbies to this comp world; it would take me away from what I want to talk about in this post. Hopefully, this will act as a guide for those who are facing similar issues to ours. There is another good reason we go with OVH: they are damn cheap. Two of their basic dedicated public cloud instances cost us $50 or so to run every month. I am pretty sure Amazon can't beat that on a monthly contract; they could probably beat it on a three-year lease, but not month to month.

Anyway, my love affair with Docker started with the issues we were having on our traditional OVH dedicated instance. We had all kinds of trouble. We were running close to 160 containers on a 32GB v2 configuration with Ubuntu 14.04 LTS. That is not too bad, given that containers share the host kernel and so carry very little overhead each. But as soon as I configured more than 160 containers, all hell broke loose. We received a whole lot of errors, and the IPs that were configured stopped working. This was probably the most frustrating moment of the whole experience, because there are no real guidelines on optimising memory usage for Docker. You simply have to have more memory if you want lots of containers.

Here are a couple of things that helped us run a lot more smoothly and that should hopefully resolve a lot of those errors. They are in no particular order. We pretty much tried all of them, and they worked flawlessly on the virtual instances we were running. Now, I am not sure what your purpose for running Docker containers is, so please use these commands with caution. If you are hesitant about executing them, I'd say consult your developer or someone who knows what they are doing (a Docker expert).

1) Stop all Docker Containers

docker stop $(docker ps -a -q)

2) Remove all Docker Containers

docker rm $(docker ps -a -q)

3) Remove any volumes that are unused.

docker volume rm $(docker volume ls -qf dangling=true)

4) Remove problematic networks

# The built-in bridge/host/none networks can't be removed; errors for those are harmless.
docker network rm $(docker network ls -q)

5) Find out if any of the processes are still occupying a port

lsof -nP | grep LISTEN

Then you’d get an output similar to this…

Dropbox             384  IPv4 0x82c      TCP 127.0.0.1:17600 (LISTEN)
com.docker.slirp   6218  IPv4 0x82c      TCP *:5432 (LISTEN) <<<MOSTLY THE PROBLEM
Python             6268  IPv4 0x82c      TCP 127.0.0.1:51617 (LISTEN)

Now, just kill it…

kill -9 6218

6) Find the docker.service systemd unit file and set this in it, under the [Service] section (helps with starting up lots of containers):

TasksMax=infinity
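On systemd-based hosts (stock Ubuntu 14.04 still uses Upstart, so this applies to newer distros with systemd 226 or later), one way to apply it without editing the packaged unit is a drop-in override. A sketch, with a file name of our own choosing:

sudo mkdir -p /etc/systemd/system/docker.service.d
printf '[Service]\nTasksMax=infinity\n' | sudo tee /etc/systemd/system/docker.service.d/tasksmax.conf
sudo systemctl daemon-reload      # pick up the new override
sudo systemctl restart docker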

7) At scale, Docker can run into kernel limits such as the maximum number of PIDs and kernel keyring entries. Use these commands, as root, to raise those limits (adjust the numbers as you see fit):

echo 4194304 > /proc/sys/kernel/pid_max
echo "20000000" > /proc/sys/kernel/keys/root_maxbytes
echo "20000000" > /proc/sys/kernel/keys/maxbytes
echo "1000000" > /proc/sys/kernel/keys/root_maxkeys
echo "1000000" > /proc/sys/kernel/keys/maxkeys

8) Docker clean-up (because it does get dirty, and it's not good at cleaning up after itself)

docker ps --filter status=dead --filter status=exited -aq \
  | xargs -r docker rm -v    # -r skips docker rm when there is nothing to remove

9) Clean up the host itself (helps with high disk space usage; these are apt commands for Ubuntu/Debian rather than Docker commands)

apt-get autoclean
apt-get autoremove

Another thing that helps is cleaning up unused images; there is a one-liner for that below. Always remember to estimate the amount of RAM you need from the footprint of your containers: with a 1MB footprint, 10k containers would cost you 10GB of memory, while with a 100MB footprint you would need 1TB. That's a lot. If you are looking at starting up a very large number of containers, the article "Docker insane scale on IBM Power Systems" is quite good; it talks about Docker's limitations when you want to start lots of containers, and we found it quite helpful.
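For the image clean-up, the commonly used one-liner removes dangling (untagged) images, in the same spirit as the dangling-volume command above:

docker rmi $(docker images -qf dangling=true)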

I am in love with Docker. I have to say, it was love at first sight. It's so awesome! It's useful for a lot of things, but I don't know how much longer we'll stay together, because technology is emerging at a very fast pace. Let's hope Docker keeps advancing, in which case it'll be until death do us part. If not, then… yeah. I'd rather not talk about that.

Here are a couple of things I currently love about Docker.

#1 Docker has everything in containers, and I love containers. Since 2013, the ecosystem has contributed nearly 100,000 public images on Docker Hub. Love, love, love.

#2 Developers love Docker, and Docker loves them back. Docker gives you full life-cycle control, and that's important for any system architecture. It works flawlessly on practically anything. So when you wake up at 2 a.m. to troubleshoot, you know you can switch on your laptop, run the image, and start debugging the script that went bad. There are lots of other reasons why developers dig it.
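A sketch of that 2 a.m. workflow (the image name is a placeholder, and it assumes the image ships a bash shell):

docker pull myapp:latest                # grab the same image that is misbehaving in production
docker run -it --rm myapp:latest bash   # open an interactive shell inside it and poke around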

#3 I have hired and spoken to lots of systems developers, and they love Docker. Whenever I ask them to install or configure anything, they love hitting up Docker Hub to look for images they can use. Why? It saves them time and, more importantly, a lot of headaches with incompatibilities. So when newer technologies emerge, you can easily try them out and put them into production without having to worry about where they work and where they don't. You don't even need to worry about breaking links or dependencies.

Good Luck!

Magento: Slow Backend but a Fast Frontend

The past two days have been a nightmare. We recently migrated all of our websites to Amazon Web Services (AWS), and the speed has been good. We love it. The infrastructure is excellent, and so is the service we're getting. I wouldn't have a lot to say about their support, though. Unless you are a reasonably big enterprise spending a lot of dollars, you can't afford their support packages. What I suggest from personal experience is subscribing to their developer support. If you run into issues operating websites on their servers, they usually point you in the right direction, and you will generally get a response within 24 hours, which is OK.

The reason I am writing this post, though, is not to address that. It's actually our experience with Magento. Over the past two days, I have learned so much about the Magento e-commerce platform. One of our clients, who runs one of the biggest online pharmacies in New Zealand – YourChemist.co.nz – hosts with us. The database is big, and so are the files. Migrating to AWS took a while, but we got there eventually. Since this website is busy all through the day, the only time we could migrate was at midnight, when it has the least amount of traffic.

After migrating, we started noticing a significant problem: the speed of Magento's backend, or admin panel as some call it, was terrible. So I did a little research on tackling this issue.