Categories
#DEV

You do not have the SUPER Privilege and Binary Logging is Enabled

Recently, while trying to upgrade our project management app, I encountered this error: “You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)”.

We initially thought the MySQL user lacked the TRIGGER privilege. But even after granting the user all the necessary privileges, we continued to hit the error during a backup process. Note that the user running the import into the database needs the SUPER privilege along with the CREATE ROUTINE, ALTER ROUTINE, CREATE TRIGGER, ALTER TRIGGER, CREATE FUNCTION and ALTER FUNCTION privileges. We use Amazon RDS to store and manage data across all our websites, and as many of you might be aware, Amazon isn’t keen on handing out the SUPER privilege to users of its managed instances.

To fix this issue, here is the solution that worked for us…

1) Open the Amazon RDS console.
2) Go to “Parameter Groups”.
3) Create a new parameter group (or add to an existing custom parameter group if you have one). In the dialog, select the MySQL family compatible with your MySQL database version, give it a name and confirm.
4) Click “Edit Parameters”.
5) Look for the parameter “log_bin_trust_function_creators” and set its value to ‘1’.
6) Click “Save Changes”.
7) Open “Instances” and expand your desired MySQL instance.
8) Click “Instance Actions” and select “Modify”.
9) Select the new parameter group and enable “Apply Immediately”.
10) Click “Continue” and confirm the changes.
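If you prefer the command line, the same console steps can be sketched with the AWS CLI. The group name, instance identifier and parameter-group family below are placeholders; adjust them to your own setup and MySQL version:

```
# Create a custom parameter group (family must match your engine version)
aws rds create-db-parameter-group \
  --db-parameter-group-name mysql-custom-params \
  --db-parameter-group-family mysql5.7 \
  --description "Custom params with log_bin_trust_function_creators"

# Set the parameter to 1
aws rds modify-db-parameter-group \
  --db-parameter-group-name mysql-custom-params \
  --parameters "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate"

# Attach the parameter group to your instance
aws rds modify-db-instance \
  --db-instance-identifier my-mysql-instance \
  --db-parameter-group-name mysql-custom-params \
  --apply-immediately
```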

It’s best to reboot for the change to take effect: select the instance and reboot your MySQL instance. That should do the job. For those of you in a traditional MySQL environment, you can set the log_bin_trust_function_creators option in two ways.

1) Specify it at server startup: --log-bin-trust-function-creators=1
2) Set it to “1” with a global statement:

mysql> SET GLOBAL log_bin_trust_function_creators = 1;
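Note that SET GLOBAL does not survive a server restart. To make the setting permanent, you can also put it in your my.cnf (assuming a standard mysqld setup):

```
[mysqld]
log_bin_trust_function_creators = 1
```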

Alternatively, if you are not planning to use your MySQL server for replication, consider turning binary logging off by removing the --log-bin option from the mysqld command line. Hope that helps developers experiencing similar issues while importing and exporting SQL, or should I say, while creating a dump 😉


Solve Nginx+PHP-FPM Access Denied Issue

Recently, one of our services went down due to an Apache upgrade that left our .htaccess file with contents that are no longer supported. When we visited our app, it threw an “Access Denied” error. Our developers looked into all sorts of possible causes, including

  1. IonCube Issues
  2. File Permission Issues
  3. Owner/User Group Issues
  4. Directory Index Issue

After some quick research, we found that many people who had experienced this suggested a solution that involves modifying the PHP-FPM configuration file. Apparently, if the security.limit_extensions directive is restricted to specific file types, PHP-FPM won’t parse PHP in other file types, causing errors such as “Access Denied”.

So we set the security.limit_extensions directive to parse .html files as well and restarted PHP-FPM. Unfortunately, this didn’t solve the problem for us. We had to modify our .htaccess file to contain a “?”, which fixed the syntax issue and, with it, the “Access Denied” page.
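For reference, the change we tried looks like this in the PHP-FPM pool configuration (typically in a file such as www.conf under /etc/php-fpm.d/ or /etc/php/*/fpm/pool.d/; the exact path depends on your distribution). Remember this was not the actual fix in our case:

```
; Limit which file extensions PHP-FPM will execute.
; Adding .html makes PHP-FPM parse .html files as PHP as well.
security.limit_extensions = .php .html
```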

Here’s how we did it…

Our original .htaccess file contents:

<IfModule mod_rewrite.c>
  RewriteEngine on
  RewriteCond $1 !^(index\.php|images|robots\.txt)
  RewriteRule ^(.*)$ ./index.php/$1 [L]
</IfModule>

This worked fine until Apache was upgraded. The following RewriteRule syntax is no longer supported:

RewriteRule ^(.*)$ ./index.php/$1 [L]

The corrected RewriteRule is:

RewriteRule ^(.*)$ ./index.php?/$1 [L]

The difference is the “?” after index.php.

That fixed the issue for us. After upgrading Apache, it is always good to check that all your sites are working and that your .htaccess files contain valid syntax. Hope it helps someone out there having similar issues.


Clean up Magento CE Database Manually

I take no responsibility for this code. If you need it for Magento EE, please see this link. I highly recommend taking a backup of your database first. DO NOT execute these SQL statements directly on a production database. Alternatively, use the Magento Database Repair Tool.

--
-- Magento CE database clean-up
--
-- This will clean tables of junk and unnecessary stuff.
-- A full reindex is needed in Magento after cleaning up.
--
-- @author      Constantin Bejenaru <[email protected]>
-- @copyright   Copyright (c) Constantin Bejenaru (http://frozenminds.com/)
-- @license     http://www.opensource.org/licenses/mit-license.html  MIT License
--

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;



-- FLAT CATALOG (EDIT TABLE NAMES)
TRUNCATE `catalog_category_flat_store_1`;
TRUNCATE `catalog_category_flat_store_2`;

TRUNCATE `catalog_product_flat_1`;
TRUNCATE `catalog_product_flat_2`;


-- DO NOT EDIT BELOW, UNLESS YOU KNOW WHAT YOU ARE DOING

-- Logs
TRUNCATE `log_customer`;
TRUNCATE `log_quote`;
TRUNCATE `log_summary`;
TRUNCATE `log_summary_type`;
TRUNCATE `log_url`;
TRUNCATE `log_url_info`;
TRUNCATE `log_visitor`;
TRUNCATE `log_visitor_info`;
TRUNCATE `log_visitor_online`;

-- Session
TRUNCATE `core_session`;
TRUNCATE `api_session`;

-- Cache
TRUNCATE `core_cache`;
TRUNCATE `core_cache_option`;
TRUNCATE `core_cache_tag`;

-- Index
TRUNCATE `index_event`;
TRUNCATE `index_process_event`;

-- Captcha
TRUNCATE `captcha_log`;

-- Sent to friend
TRUNCATE `sendfriend_log`;

-- Temp and index tables
TRUNCATE `catalog_category_anc_categs_index_tmp`;
TRUNCATE `catalog_category_anc_products_index_tmp`;
TRUNCATE `catalog_category_product_index_enbl_tmp`;
TRUNCATE `catalog_product_index_eav_decimal_tmp`;
TRUNCATE `catalog_product_index_eav_tmp`;
TRUNCATE `catalog_product_index_price_bundle_opt_tmp`;
TRUNCATE `catalog_product_index_price_bundle_sel_tmp`;
TRUNCATE `catalog_product_index_price_bundle_tmp`;
TRUNCATE `catalog_product_index_price_cfg_opt_agr_tmp`;
TRUNCATE `catalog_product_index_price_cfg_opt_tmp`;
TRUNCATE `catalog_product_index_price_downlod_tmp`;
TRUNCATE `catalog_product_index_price_final_tmp`;
TRUNCATE `catalog_product_index_price_opt_agr_tmp`;
TRUNCATE `catalog_product_index_price_opt_tmp`;
TRUNCATE `catalog_product_index_price_tmp`;
TRUNCATE `cataloginventory_stock_status_tmp`;

TRUNCATE `catalog_category_anc_categs_index_idx`;
TRUNCATE `catalog_category_anc_products_index_idx`;
TRUNCATE `catalog_category_product_index_enbl_idx`;
TRUNCATE `catalog_category_product_index_idx`;
TRUNCATE `catalog_product_index_eav_decimal_idx`;
TRUNCATE `catalog_product_index_eav_idx`;
TRUNCATE `catalog_product_index_price_bundle_idx`;
TRUNCATE `catalog_product_index_price_bundle_opt_idx`;
TRUNCATE `catalog_product_index_price_bundle_sel_idx`;
TRUNCATE `catalog_product_index_price_cfg_opt_agr_idx`;
TRUNCATE `catalog_product_index_price_cfg_opt_idx`;
TRUNCATE `catalog_product_index_price_downlod_idx`;
TRUNCATE `catalog_product_index_price_final_idx`;
TRUNCATE `catalog_product_index_price_idx`;
TRUNCATE `catalog_product_index_price_opt_agr_idx`;
TRUNCATE `catalog_product_index_price_opt_idx`;
TRUNCATE `cataloginventory_stock_status_idx`;



/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;

/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
UNLOCK TABLES;

 


Magento Redirection to Old Site [FIX]

Recently, we had to set up a staging site for one of our customers, who runs one of the most popular online pharmacies in New Zealand. We ran into the classic redirection problem, where Magento keeps redirecting the website to the old URL. Here are some of the solutions we tried; when you see this happening, work through the following.

1) Update base URL and secure base URL in the core_config_data table

2) TRUNCATE core_session table

3) Run the following query in phpMyAdmin

SET FOREIGN_KEY_CHECKS=0;
UPDATE `core_store` SET store_id = 0 WHERE code='admin';
UPDATE `core_store_group` SET group_id = 0 WHERE name='Default';
UPDATE `core_website` SET website_id = 0 WHERE code='admin';
UPDATE `customer_group` SET customer_group_id = 0 WHERE customer_group_code='NOT LOGGED IN';
SET FOREIGN_KEY_CHECKS=1;

4) Clear /var/cache/* and /var/session/* folders

5) Clear your Browser Cache/Cookies

6) CHMOD 777 the var directory to stop the system from falling back to writing in your server’s /tmp folder

7) Check your .htaccess file and make sure the URL there is changed especially if you have installed Magento in sub-folders

8) Make sure you clear APC cache

9) Make sure you have turned off Magento Compilation before backing up database and files

10) If you have set up sessions to be stored in the database, clear the expired rows from core_session

DELETE FROM core_session WHERE session_expires < UNIX_TIMESTAMP()
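As a concrete example, the base-URL update from step 1 usually looks like the query below. The staging hostname here is a placeholder; substitute your own, and keep the trailing slash:

```sql
UPDATE core_config_data
SET value = 'http://staging.example.com/'
WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');
```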

If all else fails, pray to God that you find the right solution.


My love affair with Docker

The last few days have been the worst for our business, and part of it has to do with a much-hated hosting provider: OVH. Some devs like it, and some don’t! If you read the reviews about this host, you’d probably find more bad things said about them and their network than good ones. The only reason we prefer OVH over AWS (which we do use for most of our production apps) has a lot to do with their no-questions-asked policy for IPv4 addresses.

I am sure most of you know we have a shortage of IPv4 addresses. It’s been in the news, and nearly 80% of the people who heard it probably had absolutely no clue what was going on in the computer world. Anyway, I won’t explain that for the newbies here; it would take me away from what I want to talk about in this post. Hopefully, this will act as a guide for those facing similar issues. There is another good reason we go with OVH: they are damn cheap. Two of their basic dedicated public cloud instances cost us $50 or so to run every month. I am pretty sure Amazon can’t beat that on a month-to-month contract. They could probably beat it on a three-year lease, but not on a monthly contract.

Anyway, my love affair with Docker started with issues on our traditional OVH dedicated instance. We had all kinds of trouble. We were running close to 160 containers on a 32GB v2 configuration with Ubuntu 14.04 LTS. This is not too bad, given that Docker shares memory across all the containers. But as soon as I configured more than 160 containers, all hell broke loose. We received a whole lot of errors, and the IPs that were configured stopped working. This was probably the most frustrating moment of the whole experience, because there are no real guidelines on optimising memory usage for Docker. You just have to have more memory if you want lots of containers.

Here are a couple of things to help things run a lot more smoothly and hopefully resolve many of those errors. They are in no particular order. We pretty much tried all of them, and they worked flawlessly on the virtual instances we were running. Now, I am not sure what your purpose for running Docker containers is, so please use these commands with caution. If you are hesitant about executing them, consult your developer or someone who knows what they are doing (a Docker expert).

1) Stop all Docker Containers

docker stop $(docker ps -a -q)

2) Remove all Docker Containers

docker rm $(docker ps -a -q)

3) Remove any volumes that are unused.

docker volume rm $(docker volume ls -qf dangling=true)

4) Remove problematic networks

docker network rm $(docker network ls -q)

5) Find out if any of the processes are still occupying a port

lsof -nP | grep LISTEN

Then you’d get an output similar to this…

Dropbox             384  IPv4 0x82c      TCP 127.0.0.1:17600 (LISTEN)
com.docker.slirp   6218  IPv4 0x82c      TCP *:5432 (LISTEN) <<<MOSTLY THE PROBLEM
Python             6268  IPv4 0x82c      TCP 127.0.0.1:51617 (LISTEN)

Now, just kill it…

kill -9 6218

6) Find the “docker.service” file and set this in it (helps with starting up lots of containers)

TasksMax=infinity
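On systemd-based hosts, a cleaner way to set this is a drop-in override rather than editing the unit file directly (assuming Docker runs as a systemd service):

```
# Run: systemctl edit docker
# then add the following and restart Docker:
[Service]
TasksMax=infinity
```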

7) At scale, Docker can hit kernel limits, such as running out of kernel keys and PIDs. Use these commands to raise the limits (adjust the numbers as you see fit)

echo 4194304 > /proc/sys/kernel/pid_max
echo "20000000" > /proc/sys/kernel/keys/root_maxbytes
echo "20000000" > /proc/sys/kernel/keys/maxbytes
echo "1000000" > /proc/sys/kernel/keys/root_maxkeys
echo "1000000" > /proc/sys/kernel/keys/maxkeys

8) Docker clean-up (because it does get dirty, and it’s not good at cleaning itself)

docker ps --filter status=dead --filter status=exited -aq \
  | xargs docker rm -v

9) Clean up the host packages too (helps with high disk space usage)

apt-get autoclean
apt-get autoremove
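On Docker 1.13 and later, a single built-in command covers most of the container, network and image clean-up above; it prompts before deleting, and you can add --volumes to reclaim unused volumes as well:

```
docker system prune
```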

 

Some other things that help include cleaning up unused images; you can find the commands for that online. Ask your best friend Google. Always remember to estimate the amount of RAM you need from the footprint of your containers. If your container has a footprint of 1MB, 10k containers would cost you 10GB of memory. Compare that with a 100MB footprint: you would need 1TB of memory. That’s a lot. If you are looking at starting up quite a lot of containers, this article is quite good (Docker insane scale on IBM Power Systems). It talks about the limitations of Docker when you want to start up lots of containers. We found it quite helpful.
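That back-of-envelope maths is worth scripting before you scale up. Here is a minimal sketch using the example numbers from above:

```shell
#!/bin/sh
# Estimate total memory needed: container count x per-container footprint (MB).
containers=10000
footprint_mb=100
total_mb=$((containers * footprint_mb))
echo "${containers} containers x ${footprint_mb} MB = ${total_mb} MB (~$((total_mb / 1000000)) TB)"
```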

I am in love with Docker. I have to say, it was love at first sight. It’s so awesome! It’s useful for a lot of things, but I don’t know how much longer we’ll stay together, because technology is emerging at a very fast pace. Let’s hope Docker advances, in which case it’ll be until death do us part. If not, then…yeah. I’d rather not talk about that.

Here are a couple of things I currently love about Docker.

#1 Docker has everything in containers, and I love containers. Since 2013, the ecosystem has contributed nearly 100,000 public images on Docker Hub. Love, love, love.

#2 Developers love Docker, and Docker loves them back. Docker provides full life-cycle control, and that’s important for any system architecture. It works flawlessly on practically anything. So when you wake up at 2 AM to troubleshoot, you know you can switch on your laptop, run the image, and start debugging the script that went bad. There are lots of other reasons why developers dig it.

#3 I have hired and spoken to lots of systems developers, and they love Docker. Whenever I ask them to install or configure anything, they love hitting up Docker Hub to look for images they can use. Why? It saves them time and, more importantly, a lot of headaches with incompatibilities. So when newer technologies emerge, you can easily try them out and put them into production without having to worry about where they work and where they don’t. You don’t even need to worry about breaking links or dependencies.

Good Luck!