#HowTo #Internet #Blog #DIY #RandomBits

Back from the Dead | Troubleshooting slow page loads & 503 errors on my WordPress blog

TL;DR: Read the logs, delete useless shit, keep the DB clean.

*Warning: A long read!*

Oh well, I feel like a n00b typing this, but that's how things roll. I'm not sure if I have any regular readers left, as to have regular readers one must be a regular writer, which I am clearly not. (Yes, I wish to change.)

Still, if you go back and stalk this blog, you'll notice that the post frequency for 2018 has been abysmal at its best. But yes, there's a reason, an interesting story behind it. A story which might teach you a thing or two about running WordPress on your own hosting. It has definitely taught me a lot.


So, it began back in January when I started noticing poor loading performance on the site. It being a self-managed shared hosting instance, I chalked it up to poor bandwidth or bad optimisation by Team GoDaddy (not much positive to say about them). Soon I started getting 503 errors randomly on the home page, so it was time to investigate…

Preliminary findings

I logged into my GoDaddy account and went to my hosting status page. Immediately I was greeted by an orange banner (link here) stating that I was reaching the resource limit and needed to upgrade my hosting plan soon to keep operations smooth. I scoffed mildly at their marketing tactics and popped the hood to take a look inside.

I opened cPanel and took a look at the system panel on the left. To my amazement, almost all the parameters were either terminal red or warning orange. I looked up the labels to understand the meaning of those indicators.

Red is usually my favorite color

Well, clearly, I was a bit surprised, as I have experience running WordPress since 2006-ish and I have run pretty complexly themed blogs on my potato of a local PC (a 2006 PC, yeah!) using XAMPP on Windows.

If you are a backend guy reading that line above now (in 2018), you will probably cringe harder than you do at Nicki Minaj songs. Everything about that line is WRONG (2006, PC, XAMPP).

Anyway, I had fond memories of the WordPress stack being super efficient and decent at handling a mere 100+ blog posts with ~15 plugins. Especially since I was not even posting regularly and traffic was on a decline.

Something was wrong here.

I called up GoDaddy tech sales support and patiently explained my problem, only to get a sales pitch – "Sar, I can see the upgrade banner in your account, so can you. Please give cash, we give you moar resourcez. Okay?". Hmm, probably not that brash, but you get the gist. I (mildly irritated) asked him to escalate my call to his supervisor or someone from *real* tech support.

Well, they kinda did. A lady (I am NOT a sexist) picked up the call, and I swear to Odin that she could not understand anything about wp-config or the 503 error, and asked me if I had cleared my browser cache. I politely asked her to transfer the call to her supervisor.

This time a rather mature-sounding guy picked up the call and asked about my problem. Folks, I was already 3 levels deep and 20 minutes into the call. I explained my problem yet again. He opened his admin console, got into my box and disabled all plugins (important) and my custom theme.

The site seemed to breathe for a while and we were able to access it. He told me plainly that this was a classic case of resource overutilization and that I needed to upgrade my hosting from the basic starter plan to at least their Deluxe-combo-something plan. I made the rookie mistake of asking the cost, at which he immediately said he would transfer my call to his sales representative. *facepalm* I hung up before they could plug me another level deep.

Deep down I knew that I needed to investigate this myself before throwing moolah on the table.

Lazy boi excuses; Part – I

This was February 2018. I kept my weekend free and planned to drill down into my GoDaddy shared hosting server to find the resource problem. I was sure it was some bug leaking memory or some infinite loop sucking up my CPU burst cycles. I planned to replicate the setup on an AWS t2.micro free-tier instance and made an account. AWS does require a credit card on file before letting you fire up EC2 instances. My AMEX card had some problem: it was debited the verification amount but still showed pending for 48 hours. Fair enough, I thought I'd start in 2 days…

But suddenly (a software engineer's way of shedding happy tears!), I got a huge project to build from scratch at my last job @ Shuttl. (Yeah, I had switched jobs, yet again.) The project name rhymed with XMS. I was pretty excited to build a Python Django project from scratch along with my 2 talented senior teammates. I was happy that I would get to learn a ton and deploy a whole project live AND… both of my 2 talented senior teammates left before we even hit the first milestone of the project. Yep, just left. And I was stuck with a lot of legacy code to work on, with very little idea about the framework. I had good experience with Flask, but Django did some things differently.

I slogged at work, and the good part was that I was able to understand most of the code, got pretty good at Django, implemented a ton of APIs and built a basic dashboard UI. Anyway, that sucked up the next 2 months of my life and I completely forgot about this blog and the 503 issue; meanwhile it kept getting worse, with the site opening only occasionally and throwing 503 errors most of the time.

Lazy boi excuses; Part – II

Nah, let's move ahead. I've shared too much personal stuff anyway. 😛

Let's start afresh? Scrap all the shit.

It was around May 2018 and I got some interns and a junior to help me with new projects, as our product team was pumping out PRDs at an amazing rate. I was working non-stop on those, but a window of personal time still opened up. Meanwhile, we migrated our code repositories from GitLab to GitHub and I got to know about the concept of gh-pages.

GitHub Pages – a neat, nifty project by GitHub which lets you host stuff from your repo as simple websites or blogs. For free!

This sounded like a sweet chime to my ears, as I was tired of the non-existent support from GoDaddy and their incompetent tech staff (on the free tier, at least). I started formulating a plan to nuke bitsnapper altogether, start from scratch and make a simple Martin Fowler-esque blog.

Clean, simple and nerdy.

So, I created a simple blog at jatinkrmalik.github.io and even published some posts (maybe 1). But the lack of formatting options and of the ability to customize stuff was a bummer.

I lost interest in GitHub Pages quicker than America did in Trump.

AND soon I resigned from Shuttl and left in July due to [redacted] reasons.

A new beginning, the AWS way?!

In late July, I joined a very early stage startup called Synaptic.io after what felt like a swayamvar of offer letters. (Okay, no bragging.) I was impressed by the product and by the size of the team, which you could count on one hand. It felt rewarding to get into the core team, build something great and have a chance to witness growth from the inside.

Anyway, Synaptic being a data-heavy company, we use a lot of third-party services and tools for automated deployment to staging, prod etc. Naturally, AWS is the backbone of our deployment infra. I got brand new AWS accounts for staging and prod, so I started reading about it all and got to learn about the Bitnami WordPress AMI, which comes preloaded with the WordPress stack goodies and can be deployed with a click. It was time to reactivate my own AWS account and fire this up.

A couple of weeks ago.

At the beginning of August 2018, I was finally able to authenticate my AWS account by punching in a new credit card. I fired up a Bitnami WordPress instance and went through the standard WordPress installation. Now all I had to do was back up my stuff from the GoDaddy servers and restore it here.

Sounds easy, right?

EXCEPT.

IT.

WAS.

NOT.

I logged into my good old cPanel, got the FTP creds, loaded FileZilla and started the transfer. The ETA was north of double-digit hours, as the website's public_html folder was somewhere around 1.5 GB, which is understandable as I have a lot of media files and videos. Fair enough. But this again was going to take a lot of time: the problem with transferring a folder is that every micro-sized file (<100 KB) takes mere milliseconds to download but takes its own sweet time to be written to the local disk. The obvious fix was to pack the public_html folder into a zip file and then transfer that.

I did an SSH into the box and ran zip -rv public_html.zip public_html/ to zip the directory, but one thing I forgot was that even while zipping a directory, I would hit the same problem: the zip program iterates over all the files (including the micro-sized ones) and takes quite a bit of time trying to compress each one. I left it for 20 minutes only to find it just 10% through all my files. An improvement? Sure, but I am not a very patient man.

Why is this so slow? Oh, wait.

I looked into the log (thanks to -v… verbose) and found out that I had a lot of files in my public_html folder inside the xcloner plugin directory, left over from some failed attempts to take a website backup from a plugin. I found more such folders from plugins which have no active role in powering this blog.

Checking the size of files in the plugins directory.

So, I deleted those folders in public_html/wp-content/plugins and tried running the zip command again. It was still slow and I gave up in a couple of minutes.

Clean up. Zip 'em!

I googled how to wrap files in a zip without compressing much and got to learn about compression levels in the zip utility, which go from 1-9, with 1 being the least compression and 9 the highest, defaulting to 6. So I tried again, this time with zip -1rv public_html.zip public_html/, and soon realized that iterating over a gazillion files costs the CPU more time than the compression logic itself.

Just wrapping.

I read more and found out that creating a tarball without compression is faster than the zip utility, so it was time to try that and maybe let it complete in its own sweet time. So, I fired up the command: tar -caf public.tar public_html and left it running.

Not sure if it ever completed…
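To see why wrapping beats compressing here, a minimal toy demo (the file and directory names are made up for illustration; nothing from the real blog):

```shell
# Toy demo: with heaps of tiny files, per-file overhead dominates,
# so an uncompressed tarball is the quickest way to get one big file.
mkdir -p demo_html
for i in $(seq 1 300); do printf 'tiny file %s\n' "$i" > "demo_html/f_$i.txt"; done

# One sequential pass, no per-file deflate work:
tar -cf demo.tar demo_html

# If bandwidth matters more than CPU time, compress the single tarball after;
# gzip levels mirror zip's: -1 fastest/least compression, -9 slowest/most, default 6.
gzip -1 -c demo.tar > demo.tar.gz

rm -rf demo_html
```

On the real public_html this is the difference between hours of per-file deflate work and one sequential disk pass.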

Then I logged into phpMyAdmin (a web app to manage MySQL instances) to take a backup of my bitsnapper WordPress DB. I simply clicked on export, and the downloaded file was 48 MB, which was odd, as the UI was showing a DB size of 1.2 GB. I knew an SQL backup can compress away some data, but of this magnitude? WTF. I opened the SQL file in VS Code and, sure enough, the file was incomplete and had some HTML gibberish at the end, which on inspection turned out to be the HTML of phpMyAdmin itself. Weird?

I tried exporting the DB once again from the UI, and this time the backup.sql file was 256 MB. That felt more plausible, but my instinct did a right-click and opened it in my editor once again. Sure enough, the file was still incomplete, with the same gibberish. Fair to say, the backup from phpMyAdmin was corrupted.

top

I did an SSH into my hosting box using the creds in my GoDaddy account and tried everything, from checking the output of system commands like top, ps -ef and free, but the box is well sandboxed by GoDaddy to prevent any unauthorised access. I even tried a privilege escalation with the intention of gaining more control over my hosting account and maybe restarting mysqld, but all in vain.

ps? sudo?

I knew about taking direct DB backups from the shell using mysqldump -h <host> -u <user> -p <db_name> > db_backup.sql, so it was time to try that. I ran the command and tailed the backup SQL file with tail -f db_backup.sql to watch its content as it populated. It started exporting the DB nicely, and just as I started feeling badass and went to grab a cup of coffee, the terminal presented me with an error message:

man mysqldump. First try.

I googled the problem and it had something to do with the max_allowed_packet variable of MySQL. The only two ways to change it were either editing the /etc/my.cnf file (which I was sure I didn't have sudo access for) or running the SET GLOBAL max_allowed_packet=1073741824; query in the MySQL console.

Admin? No? Sorry.

Yeah, neither of them worked. Obviously. You need the respective system admin user access for both.

The roadblock was getting stupidly irritating, and I had to get the backup. I googled more and somebody suggested passing the max_allowed_packet variable to the mysqldump command as --max-allowed-packet=1073741824. Tried that too; didn't work.

With --max_allowed_packet
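For the record, the three knobs involved look roughly like this; all of them need server-side privileges or cooperation, which is exactly what shared hosting withholds (the value shown is the 1 GiB I tried):

```shell
# 1) Server config (needs root and a mysqld restart), in /etc/my.cnf:
#      [mysqld]
#      max_allowed_packet=1G
#
# 2) At runtime, from a privileged MySQL session:
#      mysql -u root -p -e "SET GLOBAL max_allowed_packet=1073741824;"
#
# 3) Client-side, per mysqldump run; note this only raises the client's buffer,
#    the server's own cap still applies, which is likely why it changed nothing here:
#      mysqldump --max_allowed_packet=1073741824 -h <host> -u <user> -p <db> > db_backup.sql
```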

I was tired and needed to sleep, so I terminated my EC2 instance and slept.

TODAY.

Today I was feeling motivated and planned to look at the problem from another angle.

Instead of using the WordPress AMI, I decided to create the whole setup from scratch. I launched an EC2 instance with the Amazon Linux AMI. The goal was to find out whether this was really GoDaddy messing with me, or some fault in my database causing the whole shebang.

I used this post as guidance to set everything up from the ground up.

I logged in once more to my GoDaddy account, to be greeted by the orange banner urging me to upgrade. I felt weak and was about to click upgrade and throw some dough at the easy way out. But no, that's against the hacker mentality I work by.

So, I opened cPanel and phpMyAdmin and tried taking a backup again. It again downloaded a 250-something MB file with gibberish at the end. I manually removed the last part of the file, uploaded it to my EC2 instance via scp and imported it into my remote MySQL instance.
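That manual trim can be scripted. A hypothetical sketch (the function name and the '<html' marker are my own; inspect your dump to see where the junk actually starts):

```shell
# Truncate a SQL dump just before the first line containing HTML residue.
strip_after_html() {
  in="$1"; out="$2"
  # Find the first line that looks like phpMyAdmin's HTML output.
  line=$(grep -n -m1 -i '<html' "$in" | cut -d: -f1)
  if [ -n "$line" ]; then
    head -n "$((line - 1))" "$in" > "$out"   # keep everything before the junk
  else
    cp "$in" "$out"                          # nothing to strip
  fi
}

# usage: strip_after_html backup.sql backup_clean.sql
```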

After importing the public_html files, importing the SQL backup and configuring the wp-config.php file with the DB host and creds, I restarted both httpd (the Apache server) and MySQL (the DB server) and opened http://ec2-instance-url:80. To my partial euphoria, it did load up the bitsnapper header and footer, but no posts were visible.

Hmm… something was missing.

I looked at the tables in phpMyAdmin and on my MySQL server on the EC2 instance and, duh, my wp_xxx_posts and wp_xxx_postmeta tables were missing. Yeah! So the problem was that my DB had grown so large that GoDaddy shared hosting's limited bandwidth was not letting me take a backup of the whole DB. Clearly, I had to fix this.

I wrote a custom Python script to back up the bitsnapper DB table by table, to avoid hitting the max_allowed_packet limit observed last week, but the same error (mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table wp_xxx_postmeta) kept popping up.
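The gist of that script, sketched in shell (the DB and table names are placeholders, and the real mysqldump call is left as a comment since it needs the hosting box):

```shell
# Dump each table into its own file so one oversized table can't
# abort or corrupt the entire backup.
dump_per_table() {
  db="$1"; shift
  for t in "$@"; do
    # On the real box this line would be something like:
    #   mysqldump -h "$HOST" -u "$USER" -p "$db" "$t" > "backup_${t}.sql"
    echo "dumping ${db}.${t} -> backup_${t}.sql"
  done
}

# The table list itself would come from: mysql -N -e 'SHOW TABLES;' <db>
dump_per_table bitsnapper wp_posts wp_postmeta wp_options
```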

I started another intense googling session and queried along the lines of 'how to backup DB from GoDaddy shared hosting' and 'GoDaddy + shared hosting + mysqldump + error 2013', and somehow, by fluke, I landed on the Backup section of my ….drumroll?….. cPanel! *facepalm*

Facepalm moment!

It had everything I had been trying to do above, at the flick of a click. I stopped all my previous efforts and downloaded the complete website backup, which had both the public_html folder and the DB backup SQL. The whole archive was still 2.5 GB, which is huge for a small blog like this.

Anyway, I did an scp to my EC2 instance, replaced the files in /var/www/html with public_html/, and restored the MySQL backup via mysql -u <user> -p <db_name> < db_backup.sql, and this time it worked without an error. I restarted my MySQL and Apache HTTP servers with service mysqld restart; service httpd restart; and tried to load http://<ec2-instance-url>:80 and presto! The entire website loaded up.

The euphoria this time also lasted only a short burst, as within 2-3 reloads the replicated website started throwing the same 503 error again and my shell session choked up.

503! Not again.

I fired up a new terminal and tried SSHing into the box, but it had just turned unresponsive. I went to the AWS admin console to check my EC2 instance monitoring and noticed a pattern similar to the cPanel (left panel) parameter console. It was all red and orange once again.

Clearly, GoDaddy's hosting wasn't the only culprit.

Real IT support == self-troubleshooting!

That was when I decided to shed my fear of poking around the DB tables, as it was a lost cause anyway and I still had a partial backup from last year, when I migrated the blog from WordPress managed hosting to a standard shared hosting box.

Boys, it was time to run some queries. The first thing I did was log in to phpMyAdmin and look at the tables, schemas and properties. I was confident that the issue was the DB size, and that it was the reason for the slow queries choking up the CPU burst time.

I looked at the tables and found wp_xxx_postmeta to be around 950 MB in size with just 11,000 records. This immediately set off alarms in my head, as I have worked with multi-million-row DBs during my stints at Adobe & Shuttl, and table sizes there were mostly in the range of a few MBs. A rough back-of-the-napkin calculation gave an average size of ~100 KB per record in this table, which was weird because when I looked up the schema, it stored just 4 columns, i.e. meta_id, post_id, meta_key, meta_value.

Hey DB, you cray?

It was time to prod this table and understand the data inside it. I fired up a simple query:

SELECT meta_key, COUNT(*) FROM wp_s4w671g0kp_postmeta GROUP BY meta_key ORDER BY COUNT(*) DESC;

Lo and behold, the result was a bit shocking, as till now I had assumed this table would contain only post revisions and attachment metadata, but the query result was something like this:

meta_key                          count
_total_views                       1895
_view_ip_list                      1892
_jetpack_related_posts_cache       1206
_wp_attached_file                  1144
_wp_attachment_metadata            1101
wp-smpro-smush-data                1069
_wp_attachment_image_alt            903
_edit_lock                          155
_edit_last                          140
_yoast_wpseo_focuskw                118
_yoast_wpseo_linkdex                118
_thumbnail_id                       116
_yoast_wpseo_metadesc               114
_publicize_twitter_user             107
_wpas_done_all                      103
_total_likes                         77
_like_ip_list                        77
_wpas_skip_3914702                   75
_wpas_skip_11104850                  72
_yoast_wpseo_title                   56
_wp_attachment_backup_sizes          40
_yoast_wpseo_focuskw_text_input      38
_wp_old_slug                         35
essb_hideplusone                     29
essb_hidevk                          29

Do you see it? There are 1800+ records each for _view_ip_list and _total_views, and 1200+ for _jetpack_related_posts_cache, which is basically data originating from WordPress's own popular homegrown plugin, Jetpack. I googled a bit about whether these records were safe to delete, didn't find anything, took a leap of faith and executed:

DELETE FROM wp_xxx_postmeta WHERE meta_key = '_view_ip_list';

DELETE FROM wp_xxx_postmeta WHERE meta_key = '_total_views';

DELETE FROM wp_xxx_postmeta WHERE meta_key = '_jetpack_related_posts_cache';

It deleted some 4,000 of the 11,000 records the table had, and look what happened when I refreshed phpMyAdmin!

All cleaned up!

Yus! My wp_xxx_postmeta table size dropped from 900-something MB to 6.3 MB just by deleting ~4,000 records. What sick joke is that? My entire DB size dropped from ~1.2 GB to 25 MB, probably due to the cascade effect of foreign key constraints on the records I deleted.

Result?

My website was a breeze once again. The load time went down considerably, probably because of faster DB queries, and even the system monitoring parameters in cPanel went from Red/Orange to Green. I did some load testing by firing a number of curl requests at my home page from the terminal, and the server did not break any sweat.
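The "load test" was nothing fancy; a sketch of the kind of loop I mean (the URL and request count are placeholders):

```shell
# Fetch a URL n times and print the HTTP status code of each response,
# a crude way to see whether the box starts coughing up 503s under light load.
hammer() {
  url="$1"; n="$2"
  i=1
  while [ "$i" -le "$n" ]; do
    curl -s -o /dev/null -w '%{http_code}\n' "$url"
    i=$((i + 1))
  done
}

# usage: hammer "http://<ec2-instance-url>/" 20
```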

Look at that RAM usage! WHAT? Remember when I talked about running the WordPress stack on my 2006 PC with some 128 MB of memory? Yeah!

So fast much wow!


Lessons?

  • Troubleshooting GoDaddy is a long-term thing. You can either get into the shit and fix it yourself, or you can keep throwing money at the screen until you escalate to their core tech team, which I guess will be never for shared hosting plans. They might have great support for dedicated servers, though.
  • AWS is a f##king amazing piece of tech. If you know how to harden servers, by all means just drop those legacy hosting providers and go for your own setup. It's probably cheaper, faster and better VFM. A simple t2.micro instance will cost < ₹700/month. (Maybe more. Sigh, global economy!)
  • Sometimes, being a smartass isn't good. Most of the time, however, it keeps you safe.
  • Always check the logs. -_-