Migrate a WordPress Site to AWS


In a previous article I discussed launching a website on AWS. The project was framed as transferring a static site from another hosting provider. This post will extend that to migrating a dynamic WordPress site with existing content.

Install WordPress

After following the steps to launch your website to a new AWS EC2 instance, you’ll be able to connect via sFTP. I use FileZilla as my client. You’ll need the hostname (public DNS), username (ec2-user in this example), and key file for access. The latest version of WordPress can be downloaded from wordpress.org. Once connected to the server, I copy those files to the root web directory for my setup: /var/www/html

Make sure the wp-config.php file has the correct details (username, password) for your database. You should use the same database name from the previous hosting environment.
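For reference, these are the standard database constants near the top of wp-config.php (the values shown here are placeholders):

define( 'DB_NAME', 'database_name_here' );
define( 'DB_USER', 'username_here' );
define( 'DB_PASSWORD', 'password_here' );
define( 'DB_HOST', 'localhost' );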

Data backup and import

It is crucial to be sure we don’t lose any data. I make a MySQL dump of the current database and copy the entire wp-content folder to my local machine. I’m careful not to delete or cancel the old server until I am sure the new one is working identically.
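That backup might look something like this from a terminal (hypothetical names and paths; the scp commands are run from your local machine, assuming SSH access to the old host):

mysqldump -u root -p my_database > my_database.sql

scp user@old-server.com:~/my_database.sql ~/backups/
scp -r user@old-server.com:/var/www/html/wp-content ~/backups/wp-content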

Install phpMyAdmin

After configuring my EC2 instance, I install phpMyAdmin so that I can easily import the sql file.

sudo yum install php-mbstring -y
sudo systemctl restart httpd
sudo systemctl restart php-fpm
cd /var/www/html
wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.gz
mkdir phpMyAdmin && tar -xvzf phpMyAdmin-latest-all-languages.tar.gz -C phpMyAdmin --strip-components 1
rm phpMyAdmin-latest-all-languages.tar.gz
sudo systemctl start mariadb

The above Linux commands install the database management software in the root directory of the new web server. It is accessible from a browser via yourdomainname.com/phpMyAdmin. This tool is used to upload the data to the new environment.

phpMyAdmin import screen

Create the database and make sure the name matches what’s in wp-config.php from the last step. Now you’ll be able to upload your .sql file.
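If you’d rather use the command line than phpMyAdmin’s UI, the equivalent steps look like this (hypothetical database name):

mysql -u root -p
CREATE DATABASE my_database;
exit

mysql -u root -p my_database < my_database.sql

The first block creates the database from the MySQL prompt; the last line imports the dump from the shell.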

Next, I take the wp-content folder that I stored on my computer, and copy it over to the new remote server. At this point, the site homepage will load correctly. You might notice other pages won’t resolve, and will produce a 404 “not found” response. That error has to do with certain Apache settings, and can be fixed by tweaking some options.

Server settings

With my setup, I encountered the above issue with page permalinks. WordPress relies on the .htaccess file to route pages/posts with their correct URL slugs. By default, this Apache setup does not allow its settings to be overridden by .htaccess directives. To fix this issue, the httpd.conf file needs to be edited. Mine was located in this directory: /etc/httpd/conf

You’ll need to find (or create) a section that corresponds to the default document root: <Directory "/var/www/html"></Directory>. In that block, there will be an AllowOverride directive that is set to “None”. That needs to be changed to “All” so that WordPress’s .htaccess rules can take effect.

apache config settings found in the HTTPD conf file
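After the edit, the relevant block should look something like this (other directives omitted):

<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>

Restart Apache afterwards (sudo systemctl restart httpd) so the change takes effect.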

Final steps

After all the data and content has been transferred, do some smoke-testing. Try out as many pages and features as you can to make sure the new site is working as it should. Make sure you keep a backup of everything someplace secure (I use an S3 bucket). Once satisfied, you can switch your domain’s A records to point at the new box. Since the old and new servers will appear identical, I add a console.log(“new server”) to the header file. That allows me to tell when the DNS update has finally resolved. Afterwards, I can safely cancel/decommission the old web hosting package.
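Another way to watch the cutover is to query DNS directly from a terminal:

dig +short yourdomainname.com

Once that command returns the new server’s IP address, the change has propagated.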

Don’t forget to make sure SSL is set up!

Updates

AWS offers an entire suite of services to help businesses migrate. AWS Application Migration Service is a great choice to “simplify and expedite your migration while reducing costs”.

Upgrade PHP

In 2023, I used this blog post to stand up a WordPress website. I was using a theme called Balasana. When I would try to set the “Site Icon” (favicon) from the “customize” UI, I would receive a message stating that “there has been an error cropping your image”. After a few Google searches, and also asking ChatGPT, the answer seemed to be that GD (a graphics library) was either not installed or not working properly. I played with that for almost an hour, but with no success. GD was installed, and so was ImageMagick (a backup graphics library that WordPress falls back on).

The correct answer was that I needed to upgrade PHP. The AWS Linux 2 image comes with PHP 7.2. Upgrading to version 7.4 did the trick. I was able to make that happen, very painlessly, by following a blog post from Gregg Borodaty. The title of his post is “Amazon Linux 2: Upgrading from PHP 7.2 to PHP 7.4” (thanks Gregg).

Update

My recommendation, as of 2024, is to use a managed WordPress service. I wrote a post about using AWS Lightsail for that purpose: Website Redesign with WordPress Gutenberg via AWS Lightsail

 

Lazy Load Images and Assets on WordPress with IntersectionObserver

wordpress homepage design

I write online a lot. Adding articles to this blog builds a catalog of technical solutions for future reference. Updating the homepage has improved user experience and SEO. The new design displays the most recent articles as clickable cards, rather than listing the entire text of each one. The changes for this were added to the index.php file, in the child-theme folder. The theme’s original code already used a while() loop to iterate through the post records. My modification removed the article content, and kept only the title and image:

<div class="doc-item-wrap">
	<?php
	while ( have_posts() ) {
		the_post();
		echo "<div class='doc-item'><a href='". get_the_permalink() ."'><img class='lazy' data-src='".get_the_post_thumbnail_url()."'><h2>" . get_the_title() . "</h2></a></div>";
	} ?>
</div> <!-- doc-item-wrap -->

I used custom CSS, leveraging Flexbox, to style and position the cards:

.doc-item-wrap{
    display: flex;
    flex-wrap: wrap;
    justify-content: center;
}
.doc-item{
    width: 30%;
    padding: 20px;
    border: 3px solid #f0503a;
    margin: 15px;
    background: black;
    flex-grow: 1;
    text-align: center;
}
.doc-item:hover{
    background-color: #34495e;
}
.doc-item p{
    margin: 0px;
    line-height: 40px;
    color: white;
}
.doc-item img{
    display: block;
    margin: 0 auto;
}
.doc-item h2{
    font-size: 22px;
    color: white;
}
@media(max-width: 1000px){
	.doc-item{
		width: 45%
	}
}
@media(max-width: 700px){
	.doc-item{
		width: 100%
	}
}

The media queries adjust the size of the cards (and how many are in a row), based on screen size.

Look and feel of the design

Card layout design is a common way to arrange blog content. It gives visitors a visual overview of what’s available. It also stops the homepage from duplicating content that’s already available on the individual post pages.

You can see this pattern throughout the digital world. Card layout translates well across screen sizes and devices. Since I put so much effort into writing, keeping it organized was a priority. This implementation can be extended with additional content (such as date, description, etc.) and features (share links, animations, expandability). And, it fits nicely with what WordPress already provides.

Lazy loaded images

Image content can often be the biggest drag on site speed. Lazy loading media defers rendering until it is needed. Since this blog’s homepage has an image for each post, this was essential.

While iterating through post records, the image URL is assigned to a custom data-src attribute on the image tag, leaving the normal src blank. This ensures the image is not immediately retrieved or loaded. I wrote a JavaScript function to lazy load the images, relying on the IntersectionObserver API. The card’s image does not load until a user scrolls it into view. This improves the speed of the page, which has a positive effect on SEO and UX.

The code creates an IntersectionObserver object. It observes each of the image elements, checking to see if they are within the browser viewport. Once an image element comes into view, it takes the image URL from the data-src attribute, and assigns it to the tag’s src – causing the image to load.

document.addEventListener("DOMContentLoaded", function() {
  var lazyImages = [].slice.call(document.querySelectorAll("img.lazy"));
  if ("IntersectionObserver" in window) {
    let lazyImageObserver = new IntersectionObserver(function(entries, observer) {
      entries.forEach(function(entry) {
        if (entry.isIntersecting) {
          let lazyImage = entry.target;
          lazyImage.src = lazyImage.dataset.src;
          // lazyImage.srcset = lazyImage.dataset.srcset;
          lazyImage.classList.remove("lazy");
          lazyImageObserver.unobserve(lazyImage);
        }
      });
    });

    lazyImages.forEach(function(lazyImage) {
      lazyImageObserver.observe(lazyImage);
    });
  } 
});

The original JS code was referenced from a web.dev article. Web.dev is a resource created by Google that provides guidance, best practices, and tools to help web developers build better web experiences.

You can also use this same method for lazy loading videos, backgrounds, and other assets.
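For example, a background image can be deferred with the same observer pattern. This is a sketch of my own, assuming the element carries a hypothetical data-bg attribute and a lazy-background class (neither is part of the original implementation):

document.addEventListener("DOMContentLoaded", function() {
  var lazyBackgrounds = [].slice.call(document.querySelectorAll(".lazy-background"));
  if ("IntersectionObserver" in window) {
    var lazyBackgroundObserver = new IntersectionObserver(function(entries) {
      entries.forEach(function(entry) {
        if (entry.isIntersecting) {
          var element = entry.target;
          // move the URL from data-bg into the inline style, triggering the download
          element.style.backgroundImage = "url('" + element.dataset.bg + "')";
          element.classList.remove("lazy-background");
          lazyBackgroundObserver.unobserve(element);
        }
      });
    });
    lazyBackgrounds.forEach(function(element) {
      lazyBackgroundObserver.observe(element);
    });
  }
});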

IntersectionObserver to lazily load JavaScript assets

I discovered another application for the IntersectionObserver implementation that I used above to lazy load images. The Google Lighthouse performance score on my homepage was being dinged due to the “impact of third-party code”.

impact of 3rd party js on performance scores

The largest third-party resource was used for the reCAPTCHA on a contact form at the bottom of my homepage. It makes sense to not load it until the user scrolls down to that section – especially because the UX expects them to take time to fill out the form before hitting “submit” anyway.

Using the same pattern as above, I created a new `IntersectionObserver` and passed the contact form section as the target:

function captchaLazyLoad(){
	contactCaptchaTarget = document.getElementById('contactSection')
	if (!contactCaptchaTarget) {
        return;
    }
	let contactCaptchaObserver = new IntersectionObserver(function(entries, observer) {
		if (entries[0].isIntersecting) {
            var script = document.createElement('script');
		    script.src = "https://www.google.com/recaptcha/api.js";
		    document.body.appendChild(script);
            contactCaptchaObserver.unobserve(contactCaptchaTarget);
        }
	})
	contactCaptchaObserver.observe(contactCaptchaTarget);
}

I added this function to the already existing `DOMContentLoaded` event listener, just below the loop that observes the lazy image elements:

<script>
function captchaLazyLoad(){
	contactCaptchaTarget = document.getElementById('contactSection')
	if (!contactCaptchaTarget) {
        return;
    }
	let contactCaptchaObserver = new IntersectionObserver(function(entries, observer) {
		if (entries[0].isIntersecting) {
            var script = document.createElement('script');
		    script.src = "https://www.google.com/recaptcha/api.js";
		    document.body.appendChild(script);
            contactCaptchaObserver.unobserve(contactCaptchaTarget);
        }
	})
	contactCaptchaObserver.observe(contactCaptchaTarget);
}

document.addEventListener("DOMContentLoaded", function() {
  var lazyImages = [].slice.call(document.querySelectorAll("img.lazy"));
  if ("IntersectionObserver" in window) {
    let lazyImageObserver = new IntersectionObserver(function(entries, observer) {
      entries.forEach(function(entry) {
        if (entry.isIntersecting) {
          let lazyImage = entry.target;
          lazyImage.src = lazyImage.dataset.src;
          // lazyImage.srcset = lazyImage.dataset.srcset;
          lazyImage.classList.remove("lazy");
          lazyImageObserver.unobserve(lazyImage);
        }
      });
    });

    lazyImages.forEach(function(lazyImage) {
      lazyImageObserver.observe(lazyImage);
    });

    // also kick off the reCAPTCHA lazy loader defined above
    captchaLazyLoad()

  } else {
    // Possibly fall back to a more compatible method here if IntersectionObserver does not exist on the browser
  }

});

</script>

This update raised my Lighthouse performance score by fifteen points!

Building my career in tech as a programmer

Anthony Pace's resume and portfolio

Building a fulfilling career can seem daunting. Technology and programming is a great option in today’s world. Resources and opportunities are abundant. You can work from anywhere and help build the future. When I started out, I faced challenges, doubt, and struggle. The ride has been worth it, and I’m excited to keep moving forward.

Starting out

About halfway through college, I decided to drop out. I was majoring in Philosophy at a small school in New York. My main source of income was delivering pizza in the Bronx.

A decade earlier, I found computer programming. I spent my nights coding desktop applications, learning HTML, and exploring the web. Those early days of technology laid the foundation for what would be my career.

When I left school in 2007, I wasn’t sure what to do next. I started earning money in tech that same year by starting a business. It focused on creating blogs and producing content. Ads and affiliate programs served to generate revenue.

It wasn’t as lucrative as I hoped. The real value came from the web development skills I honed. The software and technologies I used then, I still rely on today.

WordPress, Linux, and PHP. Writing, SEO, and digital marketing. These were the bricks I used to form the ground floor of my career in tech.

Service worker

While my early stint at entrepreneurship didn’t make me wealthy, it proved valuable. I managed to grow a freelance business leveraging this experience.

Networking and word-of-mouth were my primary means of growth. After printing business cards, I would give them to everyone I met. While delivering pizzas, I would hand them out to any small businesses or shops I passed.

I found my first paying customer in 2008. Since then, my client list has grown to triple digits.

The services I’ve offered range beyond web development. I’ve designed logos and written copy. I’ve managed infrastructure: web hosting, domain names, email, and more.

I have designed and managed both print and digital marketing campaigns. I’ve given strategy advice to young startups. Truly full stack: business, technology, and design. That theme has rung true throughout my entire career.

The lessons learned during this period were ones of hard work and getting the job done. The most valuable skills translate across industries. Finding clients fuels the engine of any business. The art of pitching and selling is a career-long study. Being able to manage business needs has proven to be foundational.

Office life

By 2011 I landed my first in-house gig, working at a marketing company. It felt like a turning point. I was the only developer, and got to deal directly with clients. I worked there for less than a year.

In 2012 I connected with a recruiter for the first time. They set me up on many interviews. I clicked with a small medical education company based in Manhattan. Hired as a web developer, I graduated to senior engineer and marketing specialist.

Team work

There, I was the head of all things digital. That meant building websites, coding native apps, and managing infrastructure. After a promotion to head of marketing my responsibilities expanded. Managing analytics took time. Copywriting promotional materials required patience. My horizons expanded while coordinating live events, and traveling internationally to exhibition shows.

Educational grants funded our projects. They included apps, websites, live events, and digital newsletters. Having a coordinated team was imperative to making things work. The project management and leadership was world-class and invaluable.

A single project was multifarious. I would design responsive layouts, build registration websites, deploy apps, and more. Once a product would launch, I would travel to live events to handle promotion and logistics. While I fulfilled many roles, I was lucky to work with a talented group.

Software Engineer

After four years, I made the difficult decision to leave the job that helped shape my career. A better opportunity presented itself in 2016. I was hired as a software engineer. This is when I came into my own as a programmer. I was able to collaborate with a brilliant team. The technologies I became familiar with continued to grow.

I got to work with early-stage startups and brands backed by venture capital. I learned the intricacies of building digital products and growing direct-to-consumer brands. My colleagues included entrepreneurs, CEOs, and product experts. The office was exciting and full of talent.

At the time of writing this (2020), we are stuck in quarantine due to the COVID-19 pandemic. We’re working remotely, but continuing to grow. Uncertain times prompt us to evaluate our circumstances and take inventory of what we value. What is the future of my career? How does it play into my life overall?

What’s next?

I love what I do for a living. I enjoy programming; I love problem solving; I’m an artist at heart. I plan on continuing to build software products. Chances are, I’ll be doing it somewhere other than New York City – especially since remote work seems to be the future of business.

If you’re thinking about starting a career in technology as a programmer, my advice is to jump right in. Start building, keep learning, and put yourself out there. If anyone reading this wants to chat about careers, technology, programming, or anything else, feel free to email me!

Statistics for hypothesis testing

statistics

Recently, I had to implement a function into SplitWit to determine statistical significance. Shortly after, I wrote a blog post that explains what statistical significance is, and how it is used.

Here, I’ll attempt to expound the code I wrote and what I learned along the way.

Statistical significance implies that the results of an experiment didn’t happen by random chance. SplitWit specifically deals with running A/B experiments (hypothesis testing) to help increase conversion rates. Getting this right is important.

Probability

Statistical significance is determined by calculating a “probability value”. The lower that value is, the more confident we can be that our results are probably not random. Statisticians refer to it as “p-value”. In this field of study, a p-value less than 0.05 is considered good. That translates to a less than 5% chance that the results of an experiment are due to error. Source.

When the probability of a sampling error is that low, we are said to have rejected the “null hypothesis” and affirmed our “alternative hypothesis”. Our alternative hypothesis, in the context of A/B website testing, refers to the experimental UI change we made successfully increasing our conversion rate. The null hypothesis represents the idea that our changes had no true effect on any improvement we may have seen.

SplitWit experiment results
SplitWit lets you set custom metrics to measure conversion rates

How do we calculate our p-value for an A/B split test on an eCommerce website?

Here are the steps to figure out if a hypothesis test is statistically significant:

  1. Determine the conversion rates for the control and variation
  2. Work out the standard error of the difference of those rates
  3. Derive a z-score using the conversion rates and the standard error
  4. Convert that z-score to a p-value
  5. Check if that p-value is below the desired confidence level (<0.05 for 95% confidence)

This recipe comes from classical (frequentist) hypothesis testing, rather than Bayesian statistics. It attempts to describe a degree of certainty for our experiments.

Conversion rates

The goal of running A/B split tests is to increase your website’s conversion rate. The idea is to make a UI change, show that variation to a portion of users, and then measure conversions against the control version. I calculate conversion rates by dividing the number of conversions by the number of visitors, giving me an average:

$control_conversion_rate = $control_conversions/$control_visitors;
$variation_conversion_rate = $variation_conversions/$variation_visitors;

Whichever version’s conversion rate is higher is the winner of the experiment. I calculate the winner’s uptick in conversion rate using this formula: (WinningConversionRate – LosingConversionRate) / LosingConversionRate

$uptick = 0;
if($control_conversion_rate > $variation_conversion_rate){
	$uptick = (($control_conversion_rate - $variation_conversion_rate) / ($variation_conversion_rate)) * 100;
}

if($control_conversion_rate < $variation_conversion_rate){
	$uptick = (($variation_conversion_rate - $control_conversion_rate) / ($control_conversion_rate)) * 100;
}

 

Calculating p-value

After researching, I determined that I would calculate my p-value from a “z-score”.

$p_value = calculate_p_value($z_score);

A z-score (also known as a standard score) tells us how far a data point is from the mean. Source.

For the purposes of A/B testing, the data points we are interested in are the conversion rates of our control and variation versions. Consider this code snippet for determining our z-score:

$z_score = ($variation_conversion_rate-$control_conversion_rate)/$standard_error;

This formula takes the difference between the two conversion rates, and divides it by their “standard error”. The standard error is meant to tell us how spread out our data is (sampling distribution). Source.

Standard error of two means’ difference

A conversion rate is essentially an average (mean). To properly determine our z-score, we’ll want to use the standard error of their difference.

First, we’d want to get the standard error of each of those rates. Source.

This is the formula to use: sqrt( conversion_rate * ( 1 – conversion_rate ) / visitors )

Translated as PHP code:

$standard_error_control = sqrt($control_conversion_rate * (1-$control_conversion_rate) / $control_visitors);
$standard_error_variation = sqrt($variation_conversion_rate * (1-$variation_conversion_rate) / $variation_visitors);

Then, we’d use those values to find the standard error of their difference.

This is the formula: sqrt( standard_error_control^2 + standard_error_variation^2 )

Translated as PHP code:

$x = pow($standard_error_control, 2) + pow($standard_error_variation, 2);
$standard_error_of_difference = sqrt($x);

If we skip squaring our values in the 2nd step, we can also skip getting their square root in the first. Then, the code can be cleaned up, and put into a function:

public function standardErrorOfDifference($control_conversion_rate, $variation_conversion_rate, $control_visitors, $variation_visitors){
		
	$standard_error_1 = $control_conversion_rate * (1-$control_conversion_rate) / $control_visitors;
	$standard_error_2 = $variation_conversion_rate * (1-$variation_conversion_rate) / $variation_visitors;
	$x = $standard_error_1 + $standard_error_2;

	return sqrt($x);

}

This algorithm represents the “difference between proportions” and can be expressed by this formula: sqrt [p1(1-p1)/n1 + p2(1-p2)/n2]

Source.

Simplified even further, as PHP code:

$standard_error = sqrt( ($control_conversion_rate*(1-$control_conversion_rate)/$control_visitors)+($variation_conversion_rate*(1-$variation_conversion_rate)/$variation_visitors) );

Statistics as a service

Having considered all of these steps, we can put together a simple method to determine statistical significance. It takes the number of visitors and conversions for the control and the variation.

public function determineSignificance($controlVisitors, $variationVisitors, $controlHits, $variationHits){
	
	$control_conversion_rate = $controlHits/$controlVisitors;
	$variation_conversion_rate = $variationHits/$variationVisitors;

	$standard_error = sqrt( ($control_conversion_rate*(1-$control_conversion_rate)/$controlVisitors)+($variation_conversion_rate*(1-$variation_conversion_rate)/$variationVisitors) );

	$z_score = ($variation_conversion_rate-$control_conversion_rate)/$standard_error;

	$p_value = $this->calculate_p_value($z_score);

	$significant = false;
	
	if($p_value<0.05){
		$significant = true;
	}else{
		$significant = false;
	}
        return $significant;
}
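One piece not shown above is calculate_p_value. Converting a z-score to a p-value means measuring the area under the standard normal curve beyond that score. PHP has no built-in erf function, so this sketch of my own (assuming a two-tailed test) uses the Abramowitz–Stegun polynomial approximation of the normal CDF:

public function calculate_p_value($z_score){
	// two-tailed: probability of a result at least this extreme, in either direction
	$z = abs($z_score);
	// Abramowitz & Stegun 7.1.26 approximation of erf(z / sqrt(2))
	$t = 1 / (1 + 0.3275911 * ($z / sqrt(2)));
	$poly = ((((1.061405429 * $t - 1.453152027) * $t + 1.421413741) * $t - 0.284496736) * $t + 0.254829592) * $t;
	$erf = 1 - $poly * exp(-($z * $z) / 2);
	$cdf = 0.5 * (1 + $erf); // standard normal CDF at |z|
	return 2 * (1 - $cdf);
}

As a sanity check with made-up numbers: 1,000 visitors on each side, with 100 control conversions and 135 variation conversions, gives a z-score around 2.43 and a p-value near 0.015 – significant at the 95% confidence level.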

You can find this code on GitHub.

Additional References

https://github.com/markrogoyski/math-php#statistics—experiments

https://www.evanmiller.org/how-not-to-run-an-ab-test.html

https://codereview.stackexchange.com/questions/85508/certainty-of-two-comparative-values-a-b-results-certainty

https://www.mathsisfun.com/data/standard-deviation-formulas.html

https://sites.radford.edu/~biol-web/stats/standarderrorcalc.pdf

https://help.vwo.com/hc/en-us/articles/360033991053-How-Does-VWO-Calculate-the-Chance-to-Beat

https://towardsdatascience.com/the-math-behind-a-b-testing-with-example-code-part-1-of-2-7be752e1d06f

https://abtestguide.com/calc/

Automatic MySQL dump to S3


I have had some lousy luck with databases. In 2018, I created a fitness app for martial artists, and quickly gained over a hundred users in the first week. Shortly after, the server stopped resolving and I didn’t know why. I tried restarting it, but that didn’t help. Then, I stopped the EC2 instance from my AWS console. Little did I know, that would wipe all of the data from that box. Ouch.

Recently, a client let me know that their site wasn’t working. A dreaded “error connecting to the database” message was all that resolved. I’d seen this one before – no sweat. Restarting the database usually does the trick: “sudo service mariadb restart”. The command line barked back at me: “Job for mariadb.service failed because the control process exited with error code.”

Uh-oh.

The database was corrupted. It needed to be deleted and reinstalled. Fortunately, I happened to have a SQL dump for this site saved on my desktop. This was no way to live – in fear of the whims of servers.

Part of the issue is that I’m running MySQL on the same EC2 instance as the web server.  A more sophisticated architecture would move the database to RDS. This would provide automated backups, patches, and maintenance. It also costs more.

To keep costs low, I decided to automate MySQL dumps and upload them to an S3 bucket. S3 storage is cheap (a few cents per GB per month), and data transfer from EC2 into S3 in the same region is free.

Deleting and Reinstalling the Database

If your existing database did crash and become corrupt, you’ll need to delete and reinstall it. To reset the database, I SSH’d into my EC2 instance. I navigated to `/var/lib/mysql`

cd /var/lib/mysql

Next, I deleted everything in that folder:

sudo rm -r *

Finally, I ran a command to reinitialize the database directory:

mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql

Reference: https://serverfault.com/questions/812719/mysql-mariadb-not-starting

Afterwards, you’ll be prompted to reset the root password.

A CLI prompt to reset the root password after installing mariadb

You’ll still need to import your SQL dump backups. I used phpMyAdmin to do that.

Scheduled backups

AWS Setup

The first step was to get things configured in my Amazon Web Services (AWS) console. I created a new S3 bucket. I also created a new IAM user, and added it to a group that included the permission policy “AmazonS3FullAccess”.

AWS policy to allow full S3 access
This policy provides full access to all buckets.

I went to the security credentials for that user, and copied down the access key ID and secret. I would use that info to access my S3 bucket programmatically. All of the remaining steps take place from the command line, via SSH, against my server. From a Mac terminal, you could use a command like this to connect to an EC2 instance:

ssh -i /Users/antpace/Documents/keys/myKey.pem ec2-user@ec2-XX-XX-XX.us-west-2.compute.amazonaws.com

Once connected, I installed the software that allows programmatic access to AWS:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Here is the reference for installing the AWS CLI on Linux.

Shell script

Shell scripts are programs that can be run directly by Linux. They’re great for automating tasks. To create the file on my server I ran: “nano backup.sh”. This assumes you already have the nano text editor installed. If not: “sudo yum install nano” (or, “sudo apt install nano”, depending on your Linux flavor).

Below is the full code I used. I’ll explain what each part of it does.

Credit: This code was largely inspired by a post from Marcelo Gornstein.

#!/bin/bash
AWS_ACCESS_KEY_ID=XXX
AWS_SECRET_ACCESS_KEY=XXX
S3_BUCKET=myBucketsName
MYSQL_HOST=localhost
MYSQL_PORT=3306
MYSQL_USER=XXX
MYSQL_PASS=XXX
MYSQL_DB=XXX

cd /tmp
file=${MYSQL_DB}-$(date +%a).sql
mysqldump \
  --host ${MYSQL_HOST} \
  --port ${MYSQL_PORT} \
  -u ${MYSQL_USER} \
  --password="${MYSQL_PASS}" \
  ${MYSQL_DB} > ${file}
if [ "${?}" -eq 0 ]; then
  gzip ${file}
  AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} aws s3 cp ${file}.gz s3://${S3_BUCKET}
  rm ${file}.gz
else
  echo "sql dump error"
  exit 1
fi

The first line tells the system what interpreter to use: “#!/bin/bash”. Bash is a common shell scripting language. The next eight lines are variables that contain details about my AWS S3 bucket and the MySQL database connection.

After switching to a temporary directory, the filename is built. The name of the file is set to the database’s name plus the day of the week. If that file already exists (from the previous week), it’ll be overwritten. Next, the SQL dump file is created using mysqldump and the database connection variables from above. Once that operation is complete, we gzip the file, upload it to S3, and delete the zip from our temp folder.

If the mysqldump operation fails, we spit out an error message and exit the program. (Exit code 1 is a general catchall for errors. Anything other than 0 is considered an error. Valid error codes range between 1 and 255.)

Before this shell script can be used, we need to change its file permissions so that it is executable: “chmod +x backup.sh”

After all of this, I ran the file manually, and made sure it worked: “./backup.sh”
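You can also verify the outcome by checking the script’s exit code right after running it (0 means success; this script exits with 1 on a dump error):

./backup.sh
echo $?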

Sure enough, I received a success message. I also checked the S3 bucket and made sure the file was there.

S3 file dump

Scheduled Cronjob

The last part is to schedule this script to run every night. To do this, we’ll edit the Linux crontab file: “sudo crontab -e”. This file controls cronjobs – which are scheduled tasks that the system will run at set times.

The file opened in my terminal window using the vim text editor – which is notoriously harder to use than the nano editor we used before.

I had to hit ‘i’ to enter insertion mode. Then I right clicked, and pasted in my cronjob code. Then I pressed the escape key to exit insertion mode. Finally, I typed “:wq!” to save my changes and quit.

Remember how crontab works:

minute | hour | day-of-month | month | day-of-week

I set the script to run, every day, at 2:30am:

30 2 * * * sudo /home/ec2-user/backup.sh

And that’s it. I made sure to check the next day to make sure my cronjob worked (it did). Hopefully now, I won’t lose production data ever again!

Updates

Request Time Too Skewed (update)

A while after setting this up, I randomly checked my S3 buckets to make sure everything was still working. Although it had been for most of my sites, one had not been backed up in almost 2 months! I shelled into that machine, and tried running the script manually. Sure enough, I received an error: “An error occurred (RequestTimeTooSkewed) when calling the PutObject operation: The difference between the request time and the current time is too large.”

I checked the operating system’s current date and time, and it was off by 5 days. I’m not sure how that happened. I fixed it by installing and running “Network Time Protocol”:

sudo yum install ntp
sudo ntpdate ntp.ubuntu.com

After that, I was able to run my backup script successfully, without any S3 errors.

 


Nano text-editor tip I learned along the way:

You can delete chunks of text content using Nano. Use CTRL + Shift + 6 to enter selection mode, move the cursor to expand the block, and press CTRL + K to delete it.

Additional References:

https://www.javatpoint.com/steps-to-write-and-execute-a-shell-script

https://www.sitepoint.com/cron-jobs/

Drop down with CSS arrow

blog post about css menu with arrow

Here’s a quick one about how to create a drop-down UI element with an arrow via CSS. The aim is to create a menu that has drop-down sub-menus. Each drop-down should have an arrow that points up towards the parent element.

Here’s the HTML to structure the menu:

<div class="menu">
  <div class="menu-item">
    <span>Menu Item</span>
    <div class="drop-down-menu">
      <p>Sub-item</p>
      <p>Sub-item</p>
      <p>Sub-item</p>
    </div>
  </div>
  
  <div class="menu-item">
    <span>Menu Item 2</span>
  </div>
  
  <div class="menu-item">
    <span>Menu Item 3</span>
    <div class="drop-down-menu">
      <p>Sub-item</p>
      <p>Sub-item</p>
      <p>Sub-item</p>
    </div>
  </div>
  
</div>

I nest the sub-menu within the parent item, and use CSS to show it when a user mouses over:

.menu-item:hover .drop-down-menu{
  display: block;
}

I line the menu-items in a row by setting display to ‘inline-block’. This is preferred over just ‘inline’, so that their height property is respected. This is important because I will create space between the parent item and sub-menu. If the two elements don’t actually overlap, then the hover state will be lost, closing the drop-down. See what I mean:

Drop down menu with CSS
The sub-menu element overlaps with the parent item, so that the hover state is not lost.

I also set the parent item position to relative, so that the drop-down will be absolutely positioned respective to it.

.menu-item{
  cursor: pointer;
  height: 50px;
  display: inline-block;
  position: relative;
}

Since the menu items have a height larger than the actual content, I apply borders to a child span within them:

.menu-item:first-child span{
  border:none;
}
.menu-item span{
  border-left: 1px solid black;
  padding: 0 10px;
}

The sub-menu styling is straightforward. I set it to display: none, set a width, add a border and padding, and position it absolutely. Its top value pushes it off of the parent item a bit. Setting a left value of zero ensures that it will be aligned with its parent. (If you don’t set a left value, multiple sub-menus will all stack under the very first parent item.)

.drop-down-menu{
  display: none;
  width: 100px;
  position: absolute;
  background: white;
  border: 1px solid #301B46;
  padding: 20px;
  top: 40px;
  left: 0px;
}


The next step is building the arrow in the drop-down menu. The challenge is making the arrow’s border blend seamlessly with the container’s border. The illusion is achieved by overlapping the :before and :after pseudo-elements.

The triangle shape that forms the arrow is achieved by giving a bottom border to an element with no height or width. This CodePen animation does a phenomenal job of explaining the idea: https://codepen.io/chriscoyier/pen/lotjh

The drop-down’s :before element creates the white triangle that is the heart of the arrow itself. This also creates the gap in the sub-menu’s actual top border.

The :after element creates another triangle that is behind and slightly above the first one – creating the illusion of a border that connects just right with the menu’s.

The illusion can be better revealed by manipulating the position and border-width of these pseudo-elements in the inspector.

Here is the code I used for those pseudo elements:

.drop-down-menu:before {
  content: "";
  position: absolute;
  border-color: rgba(194, 225, 245, 0);
  border: solid transparent;
  border-bottom-color: white;
  border-width: 11px;
  margin-left: -10px;
  top: -21px;
  right: 65px;
  z-index: 1;
} 

.drop-down-menu:after {
    content: "";
    position: absolute;
    right: 66px;
    top: -21px;
    width: 0;
    height: 0;
    border: solid transparent;
    border-width: 10px;
    border-bottom-color: #2B1A41;
    z-index: 0;
}

You can view the whole thing in action here: https://codepen.io/pacea87/pen/OJywqrj

Code generated from CSS Arrow Please helped me a lot when I was first figuring out how to do this right.

Create a WordPress plugin – How to

Splitwit plugin

Distributing software to app and plugin markets is a great way to gain organic traffic. Last year I submitted BJJ Tracker to the Google Play store as a Progressive Web App. Since then, I get signups every few days – with zero marketing effort.

I created a WordPress plugin for SplitWit, to grow its reach in a similar way. SplitWit helps run A/B experiments on the web. A JavaScript snippet needs to be added to your code for it to work. This plugin injects the code snippet automatically.

Here is the process I took to develop and submit it to the WordPress plugin directory.

Plugin code

Since this is such a simple plugin, all I needed was one PHP file, and a readme.txt file. “At its simplest, a WordPress plugin is a PHP file with a WordPress plugin header comment.”

The header comment defines meta-data:

/*

Plugin Name: SplitWit
Plugin URI: https://www.splitwit.com/
Description: This plugin automatically adds the SplitWit code snippet to your WordPress site. SplitWit lets you create a variation of your web page using our visual editor. It splits traffic between the original version and your variation. You can track key metrics, and let SplitWit determine a winner. No code needed. Go to SplitWit to register for free.
Author: SplitWit
Version: 1.0
License: GPLv2+
License URI: http://www.gnu.org/licenses/gpl-2.0.txt

*/

My PHP code defines six functions, uses four action hooks, and one filter hook.

The first function injects the SplitWit snippet code into the WordPress website’s header:

function splitwit_header_code(){
	//inject SplitWit code snippet
	 
	$splitwit_project_id = get_option('splitwit_project_id');
	 
	if($splitwit_project_id){
		wp_enqueue_script( 'splitwit-snippet', 'https://www.splitwit.com/snippet/'.$splitwit_project_id.'.js' );
	}
}
add_action( 'wp_head', 'splitwit_header_code', 1 );

Another defines the WordPress plugin’s menu page:

function splitwit_plugin_menu_page() { ?>

	<div>
		<h1>SplitWit Experiments</h1>
		<p>This plugin automatically adds the <a href="https://www.splitwit.com" target="_blank">SplitWit</a> code snippet to your WordPress site. <a href="https://www.splitwit.com" target="_blank">SplitWit</a> lets you create a variation of your web page using our visual editor. It splits traffic between the original version and your variation. You can track key metrics, and let SplitWit determine a winner. No code needed.
		</p>
		<p>You'll need to create an account at SplitWit.com - it's free. After signing up, you can create a new SplitWit project for your website. Find that project's ID code, and copy/paste it into this page.</p>
		
		<form method="post" action="options.php">
			<?php settings_fields( 'splitwit_settings_group' ); ?>
			<input style="width: 340px;display: block; margin-bottom: 10px;" type="text" name="splitwit_project_id" value="<?php echo get_option('splitwit_project_id'); ?>" />
			<input type="submit" class="button-primary" value="Save" />
		</form>
	</div>

<?php }

I add that menu page to the dashboard:

function splitwit_plugin_menu() {
	add_options_page('SplitWit Experiments', 'SplitWit Experiments', 'publish_posts', 'splitwit_settings', 'splitwit_plugin_menu_page');
}
add_action( 'admin_menu', 'splitwit_plugin_menu' );

And link to it in the Settings section of the dashboard:

function splitwit_link( $links ) {
     $links[] ='<a href="' . admin_url( 'options-general.php?page=splitwit_settings' ) .'">Settings</a>';
    return $links;
}
add_filter('plugin_action_links_'.plugin_basename(__FILE__), 'splitwit_link');

When the SplitWit code snippet is injected into the website’s header, it needs to reference a project ID. I register that value from the menu page:

function splitwit_settings(){
	register_setting('splitwit_settings_group','splitwit_project_id','string');
}
add_action( 'admin_init', 'splitwit_settings' );

If the project ID value has not been defined, I show a warning message at the top of the dashboard:

function splitwit_warning(){
  if (!is_admin()){
     return;
  }

  $splitwit_project_id = get_option("splitwit_project_id");
  if (!$splitwit_project_id || $splitwit_project_id < 1){
    echo "<div class='notice notice-error'><p><strong>SplitWit is missing a project ID code.</strong> You need to enter <a href='options-general.php?page=splitwit_settings'>a SplitWit project ID code</a> for the plugin to work.</p></div>";
  }
}
add_action( 'admin_notices','splitwit_warning');

The readme.txt defines additional meta-data. Each section corresponds to a part of the WordPress plugin directory page. The header section is required, and includes some basic fields that are parsed into the plugin page UI.

=== SplitWit ===
Contributors: SplitWit
Plugin Name: SplitWit
Plugin URI: https://www.splitwit.com
Tags: split test, split testing, ab testing, conversions
Requires at least: 2.8
Tested up to: 5.3.2
Stable tag: 1.0

Optimize your website for maximum convertibility. This plugin lets you use SplitWit to run experiments on your WordPress website.

I also added sections for a long description and installation instructions. Later, I included a screenshots section (see Subversion repo).

Submit for review

Plugin zip files can be uploaded to WordPress.org.  Plugins can also be distributed to WordPress users without this step – but having it listed in the WordPress directory lends credibility and visibility. After my initial submission, I received an email indicating issues with my code and requesting changes. The changes were simple: “use wp_enqueue commands” and “document use of an external service”.

Originally, my “splitwit_header_code()” function included the SplitWit JS snippet directly as plain text. I changed it to use the built-in function “wp_enqueue_script()”.

//wrong:
echo '<script type="text/javascript" async src="https://www.splitwit.com/snippet/'.$splitwit_project_id.'.js"> </script>';

//correct:
wp_enqueue_script( 'splitwit-snippet', 'https://www.splitwit.com/snippet/'.$splitwit_project_id.'.js' );

Next, they wanted me to disclose the use of SplitWit, the service that powers the plugin. I added this to my readme.txt:

This plugin relies on SplitWit, a third-party service. The SplitWit service adds code to your website to run A/B experiments. It also collects data about how users interact with your site, based on the metrics you configure.

After making these changes, I replied back with an updated .zip. A few days later I received approval. But that wasn’t the end – I still needed to upload my code to a WordPress.org hosted SVN repository.

Subversion Repo

I’ve used Git for versioning my entire career. I had heard of SVN, but never used it. What a great opportunity to learn!

The approval email provided me with an SVN URL. On my local machine, I created a new folder, “svn-wp-splitwit”. From a terminal, I navigated to this directory and checked out the pre-built repo:

svn co https://plugins.svn.wordpress.org/splitwit

I added my plugin files (readme.txt and splitwit.php) to the “trunk” folder. This is where the most up-to-date, ready-to-distribute, version of code belongs.

In the “tags” folder, I created a new directory called “1.0” and put a copy of my files there too – for the sake of version control. This step is completely optional and is how SVN handles revisions.
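For what it’s worth, the tag can also be created with Subversion’s copy command instead of duplicating files by hand (hypothetical version number, run from the checkout root):

svn cp trunk tags/1.0
svn ci -m "Tagging version 1.0"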

In the assets folder I included my banner, icon, and screenshot files. The filenames follow the conventions prescribed by WordPress.org. I made sure to reference the screenshot files in my readme.txt file, under a new “Screenshots” section.

Finally, I pushed my code back up to the remote:

 svn ci -m "Initial commit of my plugin."

You can now find my plugin in the WordPress.org plugin directory. SplitWit is available for a free trial. Give it a try, and let me know what you think.

 


 

Pro-tip: Some WordPress setups won’t let you install plugins from the dashboard without providing FTP credentials, including a password. If you use a key file, instead of a password, this is a roadblock.

Install WordPress plugin roadblock
Not everyone uses a password to connect to their server.

You can remedy this by defining the file system connection method in your functions.php file (wp-config.php is also a common place for it):

define( 'FS_METHOD', 'direct' );

 

Secure a Website with SSL and HTTPS on AWS


My last post was about launching a website onto AWS. This covered launching a new EC2 instance, configuring a security group, installing LAMP software, and pointing a domain at the new instance. The only thing missing was to configure SSL and HTTPS.

Secure Sockets Layer (SSL) encrypts traffic between a website and its server. HTTPS is the protocol to deliver secured data via SSL to end-users.

In my last post, I already allowed all traffic through port 443 (the port that HTTPS uses) in the security group for my EC2 instance. Now I’ll install software to provision SSL certificates for the server.

Certbot

Certbot is free software that will communicate with Let’s Encrypt, an SSL certificate authority, to automate the management of encryption certificates.

Before downloading and installing Certbot, we’ll need to install some dependencies (Extra Packages for Enterprise Linux). SSH into the EC2 instance that you want to secure, and run this command in your home directory (/home/ec2-user):

sudo wget -r --no-parent -A 'epel-release-*.rpm' http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/

Then install it:

sudo rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm

And enable it:

sudo yum-config-manager --enable epel*

Now, we’ll need to edit the Apache (our web hosting software) configuration file. Mine is located here: /etc/httpd/conf/httpd.conf

You can use the Nano CLI text editor to make changes to this file by running:

sudo nano /etc/httpd/conf/httpd.conf

Scroll down a bit, and you’ll find a line that says “Listen 80”. Paste these lines below it (obviously, changing antpace.com to your own domain name):

<VirtualHost *:80>
    DocumentRoot "/var/www/html"
    ServerName "antpace.com"
    ServerAlias "www.antpace.com"
</VirtualHost>

Make sure you have an A record (via Route 53) for both yourwebsite.com AND www.yourwebsite.com with the value set as your EC2 public IP address.

After saving, you’ll need to restart the server software:

sudo systemctl restart httpd

Now we’re ready for Certbot. Install it:

sudo yum install -y certbot python2-certbot-apache

Run it:

sudo certbot

Follow the prompts as they appear.

Automatic renewal

Finally, schedule an automated task (a cron job) to renew the encryption certificate as needed. If you don’t do this part, HTTPS will fail for your website after a few months. Users will receive an ugly warning, telling them that your website is not secure. Don’t skip this part!

Run this command to open your cron file:

sudo nano /etc/crontab

Schedule Certbot to attempt renewal every day, at 4:05 am:

05 4 * * * root certbot renew --no-self-upgrade

Make sure your cron daemon is running:

sudo systemctl restart crond

That’s it! Now your website, hosted on EC2, will support HTTPS. Next, we’ll force all traffic to use it.
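One common approach – a sketch of my own, not covered in this post – is a rewrite rule in the site’s .htaccess file, assuming mod_rewrite is enabled and AllowOverride permits it:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Certbot’s interactive prompts may also offer to configure this redirect for you.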

* AWS Documentation Reference

Launching a website on AWS EC2

Using AWS for a website

In 2008 I deployed my first website to production. It used a simple LAMP stack, a GoDaddy domain name, and HostGator hosting.

Since 2016, I’ve used AWS as my primary cloud provider. And this year, I’m finally cancelling my HostGator package. Looking through that old server, I found artifacts of past projects – small businesses and start-ups that I helped develop and grow. A virtual memory lane.

Left on that old box was a site that I needed to move to a fresh EC2 instance. This is an opportunity to document how I launch a site to Amazon Web Services.

Amazon Elastic Compute Cloud

To start, I launch a new EC2 instance from the AWS console. Amazon’s Elastic Compute Cloud provides “secure and resizable compute capacity in the cloud.” When prompted to choose an Amazon Machine Image (AMI), I select “Amazon Linux 2 AMI”. I leave all other settings as default. When I finally click “Launch”, it’ll ask me to either generate a new key file, or use an existing one. I’ll need that file later to SSH or sFTP into this instance. A basic Linux server is spun up, with little else installed.

Linux AMI
Amazon Linux 2 AMI is free tier eligible.

Next, I make sure that instance’s Security Group allows inbound traffic on SSH, HTTP, and HTTPS. We allow all traffic via HTTP and HTTPS (IPv4 and IPv6, which is why there are 2 entries for each). That way end-users can reach the website from a browser. Inbound SSH access should not be left wide open. Only specific IP addresses should be allowed to command-line in to the server. AWS has an option labeled “My IP” that will populate it for your machine.

Inbound security rules
Don’t allow all IPs to access SSH in a live production environment.

Recent AWS UI updates let you set these common rules directly from the “Launch an instance” screen, under “Network settings”.

AWS Network settings screen

Configure the server

Now that the hosting server is up-and-running, I can command-line in via SSH from my Mac’s terminal using the key file from before. This is what the command looks like:

 ssh -i /Users/apace/my-keys/keypair-secret.pem ec2-user@ec2-XXX-XXX-XX.us-east-2.compute.amazonaws.com

You might get a CLI error that says:

“It is required that your private key files are NOT accessible by others. This private key will be ignored.”

Bad permissions warning against a pem key file in the CLI

That just means you need to update the file permissions on the key file. You should do that from the command line, in the directory where the file resides:

chmod 400 KeyFileNameHere.pem

Make sure everything is up-to-date by running “sudo yum update”. I begin installing the required software to host a website:

sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2

Note: amazon-linux-extras doesn’t exist on the Amazon Linux 2023 image.

That command enables the repositories for PHP and MariaDB – the makings of a basic LAMP stack. This next one installs Apache and the MariaDB database server:

sudo yum install -y httpd mariadb-server

MariaDB is a community-developed fork of MySQL, designed as a drop-in replacement (often with performance improvements).

By default, MariaDB will not have any password set. If you choose to install phpMyAdmin, it won’t let you log in without a password (per a default setting). You’ll have to set a password from the command line. While connected to your instance via SSH, dispatch this command:

mysql -u root -p

When it prompts you for a password, just hit enter.

login to mariadb for the first time

Once you’re logged in, you need to switch to the mysql database by running the following command:

use mysql;

Now you can set a password for the root user with the following command:

UPDATE user SET password = PASSWORD('new_password') WHERE user = 'root';

After setting the password, you need to flush the privileges to apply the changes:

FLUSH PRIVILEGES;

Start Apache: “sudo systemctl start httpd”. And make sure it always starts when the server boots up: “sudo systemctl enable httpd”.

The server setup is complete. I can access an Apache test page from a web browser by navigating to the EC2 instance’s public IP address.

Apache test page
A test page shows when no website files are present.

I’ll take my website files (that are stored on my local machine and synced to a Git repo) and copy them to the server via sFTP.

Copy website files to server
I use FileZilla to access my EC2 public directory

I need to make sure the Linux user I sFTP with owns the directory “/var/www/html”, or else I’ll get a permission denied error:

sudo chown -R ec2-user /var/www/html

Later, if I want to be able to upload media to the server from the WordPress CMS, I’ll need to be sure to change the owner of the blog’s directory to the apache user (which is the behind-the-scenes daemon user invoked for such things):

sudo chown -R apache /var/www/html/blog

* AWS Documentation Reference

Domain name and Route 53

Instead of having to use the EC2 server’s public address to see my website from a browser, I’ll point a domain name at it. AWS Route 53 helps with this. It’s a “DNS web service” that routes users to websites by mapping domain names to IP addresses.

In Route 53 I create a new “hosted zone”, and enter the domain name that I’ll be using for this site.  This will automatically generate two record sets: a Name Server (NS) record and a Start-of-Authority (SOA) record. I’ll create one more, an IPv4 address (A) record. The value of that record should be the public IP address that I want my domain to point at. You’ll probably also want to add another, identical to the last one, but specifying “www” in the record name.

setting a record in AWS

Finally, I’ll head over to my domain name registrar, and find my domain name’s settings. I update the nameserver values there to match those in my Route 53 NS record set. It’ll likely take some time for this change to be reflected in the domain’s settings. Once that is complete, the domain name will be pointing at my new EC2 instance.

And that’s it – I’ve deployed my website to AWS. The next step is to secure my site by configuring SSL and HTTPS.

If you’re migrating a dynamic WordPress site, you’ll need some additional steps to complete the migration.