Create a WordPress plugin – How to

Splitwit plugin

Distributing software to app and plugin markets is a great way to gain organic traffic. Last year I submitted BJJ Tracker to the Google Play store as a Progressive Web App. Since then, I get signups every few days – with zero marketing effort.

I created a WordPress plugin for SplitWit, to grow its reach in a similar way. SplitWit helps run A/B experiments on the web. A JavaScript snippet needs to be added to your site's code for it to work. This plugin injects that snippet automatically.

Here is the process I took to develop and submit it to the WordPress plugin directory.

Plugin code

Since this is such a simple plugin, all I needed was one PHP file and a readme.txt file. "At its simplest, a WordPress plugin is a PHP file with a WordPress plugin header comment."

The header comment defines meta-data:

/*

Plugin Name: SplitWit
Plugin URI: https://www.splitwit.com/
Description: This plugin automatically adds the SplitWit code snippet to your WordPress site. SplitWit lets you create a variation of your web page using our visual editor. It splits traffic between the original version and your variation. You can track key metrics, and let SplitWit determine a winner. No code needed. Go to SplitWit to register for free.
Author: SplitWit
Version: 1.0
License: GPLv2+
License URI: http://www.gnu.org/licenses/gpl-2.0.txt

*/

My PHP code defines six functions and uses four action hooks and one filter hook.

The first function injects the SplitWit snippet code into the WordPress website’s header:

function splitwit_header_code(){
	//inject SplitWit code snippet
	 
	$splitwit_project_id = get_option('splitwit_project_id');
	 
	if($splitwit_project_id){
		wp_enqueue_script( 'splitwit-snippet', 'https://www.splitwit.com/snippet/'.$splitwit_project_id.'.js' );
	}
}
add_action( 'wp_head', 'splitwit_header_code', 1 );
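
As an aside, the same callback could also be hooked to wp_enqueue_scripts, the hook WordPress documents for front-end assets. The published plugin uses wp_head, so treat this as a variation:

add_action( 'wp_enqueue_scripts', 'splitwit_header_code' );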

Another defines the WordPress plugin’s menu page:

function splitwit_plugin_menu_page() { ?>

	<div>
		<h1>SplitWit Experiments</h1>
		<p>This plugin automatically adds the <a href="https://www.splitwit.com" target="_blank">SplitWit</a> code snippet to your WordPress site. <a href="https://www.splitwit.com" target="_blank">SplitWit</a> lets you create a variation of your web page using our visual editor. It splits traffic between the original version and your variation. You can track key metrics, and let SplitWit determine a winner. No code needed.
		</p>
		<p>You'll need to create an account at SplitWit.com - it's free. After signing up, you can create a new SplitWit project for your website. Find that project's ID code, and copy/paste it into this page.</p>
		
		<form method="post" action="options.php">
			<?php settings_fields( 'splitwit_settings_group' ); ?>
			<input style="width: 340px;display: block; margin-bottom: 10px;" type="text" name="splitwit_project_id" value="<?php echo esc_attr( get_option('splitwit_project_id') ); ?>" />
			<input type="submit" class="button-primary" value="Save" />
		</form>
	</div>

<?php }

I add that menu page to the dashboard:

function splitwit_plugin_menu() {
	add_options_page('SplitWit Experiments', 'SplitWit Experiments', 'publish_posts', 'splitwit_settings', 'splitwit_plugin_menu_page');
}
add_action( 'admin_menu', 'splitwit_plugin_menu' );

And link to it in the Settings section of the dashboard:

function splitwit_link( $links ) {
     $links[] ='<a href="' . admin_url( 'options-general.php?page=splitwit_settings' ) .'">Settings</a>';
    return $links;
}
add_filter('plugin_action_links_'.plugin_basename(__FILE__), 'splitwit_link');

When the SplitWit code snippet is injected into the website’s header, it needs to reference a project ID. I register that value from the menu page:

function splitwit_settings(){
	register_setting('splitwit_settings_group','splitwit_project_id','string');
}
add_action( 'admin_init', 'splitwit_settings' );
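
On newer WordPress versions (4.7+), register_setting() also accepts an arguments array, which lets you declare a sanitize callback. A hedged variation:

register_setting( 'splitwit_settings_group', 'splitwit_project_id', array(
	'type'              => 'string',
	'sanitize_callback' => 'sanitize_text_field', // strip tags and stray whitespace before saving
) );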

If the project ID value has not been defined, I show a warning message at the top of the dashboard:

function splitwit_warning(){
  if (!is_admin()){
     return;
  }

  $splitwit_project_id = get_option("splitwit_project_id");
  if (!$splitwit_project_id || $splitwit_project_id < 1){
    echo "<div class='notice notice-error'><p><strong>SplitWit is missing a project ID code.</strong> You need to enter <a href='options-general.php?page=splitwit_settings'>a SplitWit project ID code</a> for the plugin to work.</p></div>";
  }
}
add_action( 'admin_notices','splitwit_warning');

The readme.txt defines additional meta-data. Each section corresponds to a part of the WordPress plugin directory page. The header section is required, and includes some basic fields that are parsed to populate the directory listing.

=== SplitWit ===
Contributors: SplitWit
Plugin Name: SplitWit
Plugin URI: https://www.splitwit.com
Tags: split test, split testing, ab testing, conversions
Requires at least: 2.8
Tested up to: 5.3.2
Stable tag: 1.0

Optimize your website for maximum convertibility. This plugin lets you use SplitWit to run experiments on your WordPress website.

I also added sections for a long description and installation instructions. Later, I included a screenshots section (see Subversion repo).
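
Those extra sections use the same double-equals markup as the header. A condensed sketch (the wording here is illustrative, not the published readme):

== Description ==
SplitWit lets you run A/B experiments on your WordPress site without writing code.

== Installation ==
1. Install and activate the plugin.
2. Register for a free account at SplitWit.com and create a project for your website.
3. Copy the project's ID code into Settings > SplitWit Experiments.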

Submit for review

Plugin zip files can be uploaded to WordPress.org. Plugins can be distributed to WordPress users without this step, but being listed in the WordPress directory lends credibility and visibility. After my initial submission, I received an email indicating issues with my code and requesting changes. The changes were simple: "use wp_enqueue commands" and "document use of an external service".

Originally, my "splitwit_header_code()" function included the SplitWit JS snippet directly as plain text. I changed it to use the built-in function "wp_enqueue_script()".

//wrong:
echo '<script type="text/javascript" async src="https://www.splitwit.com/snippet/'.$splitwit_project_id.'.js"> </script>';

//correct:
wp_enqueue_script( 'splitwit-snippet', 'https://www.splitwit.com/snippet/'.$splitwit_project_id.'.js' );

Next, they wanted me to disclose the use of SplitWit, the service that powers the plugin. I added this to my readme.txt:

This plugin relies on SplitWit, a third-party service. The SplitWit service adds code to your website to run A/B experiments. It also collects data about how users interact with your site, based on the metrics you configure.

After making these changes, I replied with an updated .zip file. A few days later I received approval. But that wasn't the end: I still needed to upload my code to a WordPress.org-hosted SVN repository.

Subversion Repo

I’ve used Git for versioning my entire career. I had heard of SVN, but never used it. What a great opportunity to learn!

The approval email provided me with an SVN URL. On my local machine, I created a new folder, "svn-wp-splitwit". From a terminal, I navigated to this directory and checked out the pre-built repo:

svn co https://plugins.svn.wordpress.org/splitwit

I added my plugin files (readme.txt and splitwit.php) to the “trunk” folder. This is where the most up-to-date, ready-to-distribute, version of code belongs.

In the "tags" folder, I created a new directory called "1.0" and put a copy of my files there too. Tagging releases like this is how SVN handles revisions, and the "Stable tag" field in readme.txt refers to one of these tags.
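
If you want SVN to track that copy itself, one way (a sketch, run from the root of the checkout) is:

svn cp trunk tags/1.0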

In the assets folder I included my banner, icon, and screenshot files. The filenames follow the conventions prescribed by WordPress.org. I made sure to reference the screenshot files in my readme.txt file, under a new "Screenshots" section.
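
For reference, the prescribed asset filenames look like this (dimensions per the WordPress.org guidelines; the screenshot caption is illustrative):

assets/banner-772x250.png
assets/icon-128x128.png
assets/screenshot-1.png

== Screenshots ==
1. The SplitWit settings page in the WordPress dashboard.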

Finally, I pushed my code back up to the remote:

 svn ci -m "Initial commit of my plugin."
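
Note that files which are new to the repository have to be scheduled for addition before a commit will pick them up. From the root of the checkout, something like this works:

 svn add --force .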

You can now find my plugin in the WordPress.org plugin directory. SplitWit is available for a free trial. Give it a try, and let me know what you think.

 


 

Pro-tip: Some WordPress setups won't let you install plugins from the dashboard without providing FTP credentials, including a password. If you use a key file instead of a password, this is a roadblock.

Install WordPress plugin roadblock
Not everyone uses a password to connect to their server.

You can remedy this by defining the file system connection method in your wp-config.php (or your theme's functions.php) file:

define( 'FS_METHOD', 'direct' );

 

Secure a Website with SSL and HTTPS on AWS

Secure a website with SSL and HTTPS on AWS

My last post was about launching a website onto AWS. This covered launching a new EC2 instance, configuring a security group, installing LAMP software, and pointing a domain at the new instance. The only thing missing was to configure SSL and HTTPS.

Secure Sockets Layer (SSL) encrypts traffic between a website and its server. HTTPS is the protocol to deliver secured data via SSL to end-users.

In my last post, I already allowed all traffic through port 443 (the port that HTTPS uses) in the security group for my EC2 instance. Now I’ll install software to provision SSL certificates for the server.

Certbot

Certbot is free software that will communicate with Let’s Encrypt, an SSL certificate authority, to automate the management of encryption certificates.

Before downloading and installing Certbot, we’ll need to install some dependencies (Extra Packages for Enterprise Linux). SSH into the EC2 instance that you want to secure, and run this command in your home directory (/home/ec2-user):

sudo wget -r --no-parent -A 'epel-release-*.rpm' http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/

Then install it:

sudo rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm

And enable it:

sudo yum-config-manager --enable epel*

Now, we’ll need to edit the Apache (our web hosting software) configuration file. Mine is located here: /etc/httpd/conf/httpd.conf

You can use the Nano CLI text editor to make changes to this file by running:

sudo nano /etc/httpd/conf/httpd.conf

Scroll down a bit, and you'll find a line that says "Listen 80". Paste these lines below it (changing antpace.com to your own domain name):

<VirtualHost *:80>
    DocumentRoot "/var/www/html"
    ServerName "antpace.com"
    ServerAlias "www.antpace.com"
</VirtualHost>

Make sure you have an A record (via Route 53) for both yourwebsite.com AND www.yourwebsite.com with the value set as your EC2 public IP address.

After saving, you’ll need to restart the server software:

sudo systemctl restart httpd

Now we’re ready for Certbot. Install it:

sudo yum install -y certbot python2-certbot-apache

Run it:

sudo certbot

Follow the prompts as they appear.
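
If you'd rather be explicit about which domains the certificate should cover, the Apache plugin also accepts them as flags (substitute your own domain names):

sudo certbot --apache -d antpace.com -d www.antpace.com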

Automatic renewal

Finally, schedule an automated task (a cron job) to renew the encryption certificate as needed. If you don’t do this part, HTTPS will fail for your website after a few months. Users will receive an ugly warning, telling them that your website is not secure. Don’t skip this part!

Run this command to open your cron file:

sudo nano /etc/crontab

Schedule Certbot to renew every day, at 4:05 am:

05 4 * * * root certbot renew --no-self-upgrade
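
You can test the renewal process ahead of time, without issuing real certificates:

sudo certbot renew --dry-run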

Make sure your cron daemon is running:

sudo systemctl restart crond

That's it! Now your website, hosted on EC2, will support HTTPS. Next, we'll force all traffic to use it.
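
One way to force HTTPS with Apache alone is a permanent redirect in the port-80 virtual host we created earlier (a sketch; certbot's Apache plugin also offers to configure a redirect when you run it):

<VirtualHost *:80>
    DocumentRoot "/var/www/html"
    ServerName "antpace.com"
    ServerAlias "www.antpace.com"
    Redirect permanent / https://antpace.com/
</VirtualHost>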

* AWS Documentation Reference

Launching a website on AWS EC2

Using AWS for a website

In 2008 I deployed my first website to production. It used a simple LAMP stack, a GoDaddy domain name, and HostGator hosting.

Since 2016, I’ve used AWS as my primary cloud provider. And this year, I’m finally cancelling my HostGator package. Looking through that old server, I found artifacts of past projects – small businesses and start-ups that I helped develop and grow. A virtual memory lane.

Left on that old box was a site that I needed to move to a fresh EC2 instance. This is an opportunity to document how I launch a site to Amazon Web Services.

Amazon Elastic Compute Cloud

To start, I launch a new EC2 instance from the AWS console. Amazon’s Elastic Compute Cloud provides “secure and resizable compute capacity in the cloud.” When prompted to choose an Amazon Machine Image (AMI), I select “Amazon Linux 2 AMI”. I leave all other settings as default. When I finally click “Launch”, it’ll ask me to either generate a new key file, or use an existing one. I’ll need that file later to SSH or sFTP into this instance. A basic Linux server is spun up, with little else installed.

Linux AMI
Amazon Linux 2 AMI is free tier eligible.

Next, I make sure that instance’s Security Group allows inbound traffic on SSH, HTTP, and HTTPS. We allow all traffic via HTTP and HTTPS (IPv4 and IPv6, which is why there are 2 entries for each). That way end-users can reach the website from a browser. Inbound SSH access should not be left wide open. Only specific IP addresses should be allowed to command-line in to the server. AWS has an option labeled “My IP” that will populate it for your machine.

Inbound security rules
Don’t allow all IPs to access SSH in a live production environment.

Recent AWS UI updates let you set these common rules directly from the "Launch an instance" screen, under "Network settings".

AWS Network settings screen

Configure the server

Now that the hosting server is up-and-running, I can command-line in via SSH from my Mac’s terminal using the key file from before. This is what the command looks like:

 ssh -i /Users/apace/my-keys/keypair-secret.pem ec2-user@ec2-XXX-XXX-XX.us-east-2.compute.amazonaws.com

You might get a CLI error that says:

"It is required that your private key files are NOT accessible by others. This private key will be ignored."

Bad permissions warning against a pem key file in the CLI

That just means you need to update the file permissions on the key file. You should do that from the command line, in the directory where the file resides:

chmod 400 KeyFileNameHere.pem

Make sure everything is up-to-date by running "sudo yum update". Then I begin installing the required software to host a website:

sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2

Note: amazon-linux-extras doesn’t exist on the Amazon Linux 2023 image.

That command installs PHP 7.2 and enables the repositories for a basic LAMP stack. This next one installs the Apache web server and the MariaDB database server:

sudo yum install -y httpd mariadb-server

MariaDB is a community-developed fork of MySQL, designed as a drop-in replacement.
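
Before connecting to the database for the first time, make sure the MariaDB service is running and enabled at boot:

sudo systemctl start mariadb
sudo systemctl enable mariadb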

By default, MariaDB will not have any password set. If you choose to install phpMyAdmin, it won't let you log in without a password (per its default settings). You'll have to set one from the command line. While connected to your instance via SSH, dispatch this command:

mysql -u root -p

When it prompts you for a password, just hit enter.

login to mariadb for the first time

Once you’re logged in, you need to switch to the mysql database by running the following command:

use mysql;

Now you can set a password for the root user with the following command:

UPDATE user SET password = PASSWORD('new_password') WHERE user = 'root';

After setting the password, you need to flush the privileges to apply the changes:

FLUSH PRIVILEGES;

Start Apache: "sudo systemctl start httpd". And make sure it always starts when the server boots up: "sudo systemctl enable httpd".

The server setup is complete. I can access an Apache test page from a web browser by navigating to the EC2 instance’s public IP address.

Apache test page
A test page shows when no website files are present.

I’ll take my website files (that are stored on my local machine and synched to a Git repo) and copy them to the server via sFTP.

Copy website files to server
I use FileZilla to access my EC2 public directory
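
If you prefer the command line over FileZilla, an scp call like this works too (a sketch; the local folder name is just an example, and the host is the same placeholder as above):

scp -i /Users/apace/my-keys/keypair-secret.pem -r ./my-website/* ec2-user@ec2-XXX-XXX-XX.us-east-2.compute.amazonaws.com:/var/www/html/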

I need to make sure the Linux user I sFTP with owns the directory “/var/www/html”, or else I’ll get a permission denied error:

sudo chown -R ec2-user /var/www/html

Later, if I want to be able to upload media to the server from the WordPress CMS, I’ll need to be sure to change the owner of the blog’s directory to the apache user (which is the behind-the-scenes daemon user invoked for such things):

sudo chown -R apache /var/www/html/blog

* AWS Documentation Reference

Domain name and Route 53

Instead of having to use the EC2 server’s public address to see my website from a browser, I’ll point a domain name at it. AWS Route 53 helps with this. It’s a “DNS web service” that routes users to websites by mapping domain names to IP addresses.

In Route 53 I create a new “hosted zone”, and enter the domain name that I’ll be using for this site.  This will automatically generate two record sets: a Name Server (NS) record and a Start-of-Authority (SOA) record. I’ll create one more, an IPv4 address (A) record. The value of that record should be the public IP address that I want my domain to point at. You’ll probably also want to add another, identical to the last one, but specifying “www” in the record name.

setting a record in AWS

Finally, I’ll head over to my domain name registrar, and find my domain name’s settings. I update the nameserver values there to match those in my Route 53 NS record set. It’ll likely take some time for this change to be reflected in the domain’s settings. Once that is complete, the domain name will be pointing at my new EC2 instance.

And that’s it – I’ve deployed my website to AWS. The next step is to secure my site by configuring SSL and HTTPS.

If you're migrating a dynamic WordPress site, you'll need some additional steps to complete the migration.

React JS & Yup: only require an input, if another is not empty

React JS and Yup

Typically, I avoid using JS app frameworks, and default to plain vanilla JavaScript. But, in keeping up with what is current – and working on projects as part of a team – React is inevitable: “A JavaScript library for building user interfaces” . Yup is the go-to form validation library in this context. Its GitHub page calls it “Dead simple Object schema validation”.

Yup creates validation schemas for inputs. Create a Yup validation object, and wire it up to a form – easy.

The ask

Setting: an existing React project, with a signup form. The form includes address inputs. The "country" input was not a required field – it could be left blank. My assignment was to make that field required, but only if the "state/territory" input was not empty. Sounds straightforward.

Here is a sample of the original code:

import * as yup from "yup";

export const apValidateMyAddress = () => ({
  name: yup.string().required("Don't leave this blank!!"),
  email: yup.string().email(),
  address: yup.string(),
  city: yup.string(),
  state: yup.string(),
  country: yup.string()
});

At first, I wasn’t sure if I should update this schema code directly. I thought about checking if the state field was blank, or not, and applying a different schema object instead. That would have been the wrong approach.

Doing some research, I discovered that Yup's when() method was the solution. It would let me "adjust the schema based on a sibling or sibling children fields".

My first attempt was wrong, and didn't work:

export const apValidateMyAddress = () => ({
  name: yup.string().required("Don't leave this blank!!"),
  email: yup.string().email(),
  address: yup.string(),
  city: yup.string(),
  state: yup.string(),
  country: yup.string().when('state', {
    is: true,
    then: yup.string().required('This is a required field.')
  })
});

Errors were thrown. Documentation examples were hard to come by, and I was new at this. I wanted the condition to be true if "state" was not blank. Setting the "is" clause to "true" would only work if state was validated as a boolean – state: yup.boolean(). Ultimately, I was able to check that the "state" value existed by passing a predicate function to "is":

export const apValidateMyAddress = () => ({
  name: yup.string().required("Don't leave this blank!!"),
  email: yup.string().email(),
  address: yup.string(),
  city: yup.string(),
  state: yup.string(),
  country: yup.string().when('state', {
    is: (value: any) => !!value,
    then: yup.string().required('This is a required field.')
  })
});

More Conditional Logic

In another example of validating a field based on the value of another, I leveraged the context attribute. This allows you to pass additional data to the validation schema.

const validOrderQuantity = await orderQuantitySchema.validate(orderQuantity, { context: { previousOrderQuantity, customerType } });

Here, for my order quantity to be valid, it needs to be greater than (or equal to) the previous order quantity, but only when the customer type is “existing”. Although the order quantity is what is being validated, I need the context of the previous order quantity and the customer type.

In my validation schema, I use a when condition to check the customer type and to reference the passed argument:

export const myValidationSchema = Yup.number().when('$customerType', {
  is: 'existing',
  then: Yup.number().min(Yup.ref('$previousOrderQuantity'), "Invalid!")
});

Sending email from your app using AWS SES

amazon ses

Simple Email Service (SES) from AWS

Email is still one of the best ways we can communicate with our users; arguably better than SMS or app notifications. An effective messaging strategy can enhance the journey our products offer.

This post is about sending email from the website or app you’re developing. We will use SES to send transactional emails. AWS documentation describes Simple Email Service (SES) as “an email sending and receiving service that provides an easy, cost-effective way for you to send email.” It abstracts away managing a mail server.

Identity verification

When you first get set up, SES will be in sandbox mode. That means you can only send email to verified recipients. To get production access and start sending emails to the public, you will need to verify an email address and a sending domain.

Configuring your domain name

Sending email through SES requires us to verify the domain name that messages will be coming from. We can do this from the “Domains” dashboard.

Verify a new domain name
Verify a new domain name

This will generate a list of record sets that will need to be added to our domain as DNS records. I use Route 53, another Amazon service, to manage my domains – so that's where I'll need to enter this info. If this is a new domain that you are working with, you will need to create a "hosted zone" in AWS for it first.

AWS Route 53

As of this update, Amazon recommends using CNAME DKIM (DomainKeys Identified Mail) records instead of TXT records to authenticate your domain. These signatures enhance the deliverability of your mail with DKIM-compliant email providers. If your domain name is in Route 53, SES will automatically import the CNAME records for you.

warning message about legacy TXT records

Understand deliverability

We want to be confident that intended recipients are actually getting the messages that are sent.  Email service providers, and ISPs, want to prevent being abused by spammers. Following best practices, and understanding deliverability, can ensure that emails won’t be blocked.

Verify any email addresses that you are sending messages from: “To maintain trust between email providers and Amazon SES, Amazon SES needs to ensure that its senders are who they say they are.”

You should use an email address at the domain you verified. To host business email, I suggest AWS WorkMail or Google Workspace.

verify a new email in AWS

Make sure DKIM has been verified for your domain:  “DomainKeys Identified Mail (DKIM) provides proof that the email you send originates from your domain and is authentic”. If you’re already using Route 53 to manage your DNS records, SES will present an option to automatically create the necessary records.

Route 53 DKIM records

Be reputable. Send high-quality emails and make opt-out easy. You don't want to be marked as spam. Respect sending quotas. If you plan on sending bulk email to a mailing list, I suggest using an Email Service Provider such as MailChimp (SES could be used for that too, but that's outside the scope of this writing).

Sending email

SES can be used three ways: by API, through the SMTP interface, or from the console. Each method lists different ways to authenticate. "To interact with [Amazon SES], you use security credentials to verify who you are and whether you have permission to interact with Amazon SES." Next, we will use API credentials – an access key ID and secret access key.

Create an access key pair

An access key can be created using Identity and Access Management (IAM). "You use access keys to sign programmatic requests that you make to AWS." This requires creating a user, and setting its permissions policies to include "AmazonSESSendingAccess". We can then create an access key under that user's "security credentials" tab.

Permission policy for IAM user
Permission policy for IAM user

Integrating with WordPress

Sending email from WordPress is made easy with plugins. They can be used to easily create forms. Those forms can be wired to use the outbound mail server of our choice using WP Mail SMTP Pro. All we’ll need to do is enter the access key details. If we try to send email without specifying a mail server, forms will default to sending messages directly from the LAMP box hosting the website. That would result in low-to-no deliverability.

Screenshot of WP Mail SMTP Pro
Screenshot of WP Mail SMTP Pro

As of this update, the above plugin now only provides the “Amazon SES” option with a premium (not free) monthly subscription. That’s OK, because we can still use Amazon SES through the “Other SMTP” option.

SMTP Username and Password

The "Other SMTP" option asks for a username and password. You can create those credentials from Amazon SES by going to "SMTP Settings". When you click "Create SMTP credentials" you will be redirected to the IAM service to create a user with the details already filled in.

creating a new IAM user for SMTP

It will give you the SMTP user name (different than the IAM user name) and password on the following screen. After you add these details to the plugin, any emails sent from this WordPress instance will use SES as the mail server. As a use case, I create forms with another plugin called “Contact Form 7”. Any emails sent through these forms will use the above set up.

Integrating with custom code

Although the WordPress option is simple, the necessary plugin has an annual cost. Alternatively, SES can integrate with custom code we’ve written. We can use PHPMailer to abstract away the details of sending email programmatically. Just include the necessary files, configure some variables, and call a send() method.

Contact form powered by SES
Contact form powered by SES

The contact forms on my résumé  and portfolio webpages use this technique. I submit the form data to a PHP file that uses PHPMailer to interact with SES. The front-end uses a UI notification widget to give the user alerts. It’s available on my GitHub, so check it out.

Front-end, client-side:

<form id="contactForm">
    <div class="outer-box">
      
        <input type="text" placeholder="Name" name="name" value="" class="input-block-level bordered-input">
        <input type="email" placeholder="Email" value="" name="email" class="input-block-level bordered-input">
        <input type="text" placeholder="Phone" value="" name="phone" class="input-block-level bordered-input">
       
        <textarea placeholder="Message" rows="3" name="message" id="contactMessage" class="input-block-level bordered-input"></textarea>
        <button type="button" id="contactSubmit" class="btn transparent btn-large pull-right">Contact Me</button>
    </div>
</form>
<link rel="stylesheet" type="text/css" href="/ui-messages/css/ui-notifications.css"> 
<script src="/ui-messages/js/ui-notifications.js"></script>
<script type="text/javascript">
$(function(){
	var notifications = new UINotifications();
	$("#contactSubmit").click(function(){
		var contactMessage = $("#contactMessage").val();
		if(contactMessage.length < 1){
			notifications.showStatusMessage("Don't leave the message area empty.");
			return;
		}
		var data = $("#contactForm").serialize();
		$.ajax({
			type:"POST",
			data:data,
			url:"assets/contact.php",
			success:function(response){
				console.log(response);
				notifications.showStatusMessage("Thanks for your message. I'll get back to you soon.");
				$("form input, form textarea").val("");					
			}
			
		});
	});
});
</script>

In the PHP file, we set the username and password for the SMTP connection. Note that SES SMTP credentials (generated under "SMTP Settings") are derived from, but not identical to, a raw IAM access key pair. Make sure the region variable matches what you're using in AWS. It would also be best practice to record each message to a database (the WordPress plugin from earlier handles that out-of-the-box; a sketch follows the code below). We might also send an additional email to the user, letting them know their note was received.

Back-end, server-side:

<?php
//send email via amazon ses
use PHPMailer\PHPMailer\PHPMailer;
use PHPMailer\PHPMailer\Exception;

$name = "";
$email = "";
$phone = "";
$message = "";

if(isset($_POST["name"])){
	$name = $_POST["name"];
}
if(isset($_POST["email"])){
	$email = $_POST["email"];
}
if(isset($_POST["phone"])){
	$phone = $_POST["phone"];
}
if(isset($_POST["message"])){
	$message = $_POST["message"];
}

$region = "us-east-1"
$aws_key_id = "xxx"
$aws_key_secret = "xxx"

require '/var/www/html/PHPMailer/src/Exception.php';
require '/var/www/html/PHPMailer/src/PHPMailer.php';
require '/var/www/html/PHPMailer/src/SMTP.php';
// // Instantiation and passing `true` enables exceptions
$mail = new PHPMailer(true);
try {
	if(strlen($message) > 1){
    //Server settings
	    $mail->SMTPDebug = 2;                                       // Enable verbose debug output
	    $mail->isSMTP();                                            // Set mailer to use SMTP
	    $mail->Host       = 'email-smtp.' . $region . '.amazonaws.com';  // Specify main and backup SMTP servers
	    $mail->SMTPAuth   = true;                                   // Enable SMTP authentication
	    $mail->Username   = $aws_key_id;                     // access key ID
	    $mail->Password   = $aws_key_secret;                               // AWS Key Secret
	    $mail->SMTPSecure = 'tls';                                  // Enable TLS encryption, `ssl` also accepted
	    $mail->Port       = 587;                                    // TCP port to connect to

	    //Recipients
	    $mail->setFrom('XXX@antpace.com', 'Portfolio');
	    $mail->addAddress("XXX@antpace.com");     // Add a recipient
	    $mail->addReplyTo('XXX@antpace.com', 'Portfolio');

	    // Content
	    $mail->isHTML(true);                                  // Set email format to HTML
	  
	    $mail->Subject = 'New message from your portfolio page.';
	    $mail->Body    = "This message was sent from: $name - $email - $phone \n Message: $message";
	    $mail->AltBody = "This message was sent from: $name - $email - $phone \n Message: $message";
	    
	    $mail->send();
	    echo 'Message has been sent';
	}
    
} catch (Exception $e) {
    echo "Message could not be sent. Mailer Error: {$mail->ErrorInfo}";
}

?>
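
As for recording each message to a database, a minimal sketch might look like this. The table name, columns, and credentials are hypothetical; adjust them to your own schema:

<?php
// hypothetical example: log the submission alongside sending the email
$pdo = new PDO("mysql:host=localhost;dbname=portfolio", "db_user", "db_password");
$stmt = $pdo->prepare(
    "INSERT INTO contact_messages (name, email, phone, message, created_at)
     VALUES (?, ?, ?, ?, NOW())"
);
$stmt->execute(array($name, $email, $phone, $message));
?>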

The technical side of sending email from software is straightforward. The strategy can be fuzzy and requires planning. Transactional emails have an advantage over marketing emails: since they are triggered by a user's action, they carry more meaning. They have higher open rates, and that makes them an opportunity.

How can we optimize the usefulness of these emails? Create a recognizable voice in your communication that reflects your brand. Provide additional useful information, resources, or offers. These kinds of emails are an essential part of the user experience and of your product's development.

You can find the code for sending emails this way on my GitHub.

 

Managing content for an art website

managing content as data

Content Management

Any product, experience, or artwork – anything you build – is made up of pieces. And content always sits at the center. Content is the fleshy part of media.

The other pieces include structure, style, and functionality. These parts lay out a skeleton, decorate the aesthetic, and add usefulness. This model translates well to modern web development. HTML defines the structure. CSS describes the style. JavaScript adds interactivity. But always, content is King.

That’s why a robust content management system (CMS) is critical. Most clients prefer to have one. It makes updates easy. WordPress is the modern choice. It’s what this blog is built on.

WordPress Website

A website I built featured the work of visual artist Ron Markman – paintings, etchings, photos. It had a lot of content. A lot of content that needed massaging. As you may have guessed, I chose WordPress to manage it. I chose the HTML5 Blank WordPress Theme as our starting point.

I was recommended to Ericka by a previous client. Her late relative left behind a corpus of work that needed a  new digital home. They already had a website, but needed it revamped and rebuilt from scratch.

ron markman old website

This was my proposal:

"The look and feel will be modern, sleek, and adaptive. The homepage will feature a header section highlighting selected work. The website's menu will link to the various category pages (as well as any ancillary pages). The menu, along with a footer, will persist throughout all pages to create a cohesive experience and brand identity. The website will be built responsively, adapting to all screen-sizes and devices. As discussed, select content will feature "zooming" functionality."

art website

This was a situation where I had to be a project manager, and deliver results. Although the content itself was impressive, it was delivered as image files in various formats and sizes. Filenames were not consistent. And the meta-data – descriptions, titles, notes – was listed in Excel files that didn't always match up to the images' filenames. This required a lot of spot-checking and manual work. I did my best to automate as much as I could, and to make things uniform.

Phases of Work

I broke the work out into four phases. This is how I described it to the client:

Layout and hierarchy

I will provide wire-frame layouts describing the essential structure, layout and hierarchy of the website overall.

Look and feel

I will finalize aesthetic details such as color palette, typography, user interface, and stylization.

Implementation

I will build and deploy the website with the content provided.

Content input

You'll need to provide all copy, images, media, etc. before a first build of the website can be deployed. I'll be implementing a standard content-management system that will allow you to add additional content, categories, pages, etc. Oftentimes, content delivery can be a bottleneck for projects like this. After the finalized website is deployed live with the initial content provided, you'll be responsible for adding anything further.

Image Gallery

The UI to show each piece of art was powered by Galleria. It was out-of-the-box responsive. At first, each gallery page had so many large image files that things would crash or load very slowly. I was able to leverage the framework’s AJAX lazy loading to mitigate that issue.

Resizing Multiple Images

Resizing a batch of images can be done directly in macOS by selecting the files and opening them in Preview. From the 'Edit' menu, I clicked 'Select All'. Then, in the 'Tools' menu I found 'Adjust Size'. Windows has a similar feature, as do other image-manipulation apps.
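
If you would rather script the resizing, macOS also ships with the sips utility. A one-liner like this caps the longest edge at 1200 pixels (note that it overwrites files in place, so run it on a copy of the folder):

sips -Z 1200 *.jpg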

Renaming Multiple Files

I had to make the filenames match what was listed in the meta-data spreadsheet. Here’s the command I used, in Mac OS, to truncate filenames to the first eight characters:

rename -n 's/(.{8}).*(\.jpg)$/$1$2/' *.jpg

Batch Uploading WordPress Posts

Each piece of art was a WordPress post, with a different title, meta-values, and image. Once all of the files were sized and named properly, I uploaded them to the server via sFTP. Each category of art (paintings, photos, etc.) was a folder. I created a temporary database table that matched the columns from the meta-data spreadsheet I was given.

CREATE TABLE `content` (
  `content_id` int,
  `title` varchar(250) NOT NULL,
  `medium` varchar(250) NOT NULL,
  `category_id` varchar(250) NOT NULL,
  `size` varchar(250) NOT NULL,
  `date` varchar(250) NOT NULL,
  `filename` varchar(100) NOT NULL,
  `processed` int
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
COMMIT;

I wrote a PHP script that would loop through all records and create a new post for each. I had to make sure to include core WordPress functionality, so that I could use the wp_insert_post() method:

require_once('/var/www/html/wp-load.php');

Once I connected to the database, I queried my temporary table, excluding any records that have been marked as already uploaded:

$query = "SELECT * FROM `content` where `processed` != 1"; 
$result = mysqli_query($mysql_link, $query);

While looping through each record, I would look up the WordPress category ID and slug based on the provided category name. This would allow my code to assign the post to the correct category, and to know which folder the image file was in. Once the post is inserted, I take that post ID and assign meta-values. At the end of the loop, I mark this record as processed.

while ($row = mysqli_fetch_assoc($result)) {

    $category = $row['category_id']; // this column holds the category name from the spreadsheet
    $content_id = $row['content_id'];
    $term_id = "";
    $slug = "";

    // look up the WordPress term ID and slug for this category name
    $category_query = $mysqli->prepare("SELECT * FROM `wp_terms` WHERE name = ?");
    $category_query->bind_param("s", $category);
    $category_query->execute();
    $category_result = $category_query->get_result();
    if ($category_result->num_rows > 0) {
        while($cat_row = $category_result->fetch_assoc()) {
           $term_id = $cat_row['term_id'];
           $slug = $cat_row['slug'];
        }
    }

    $post_id = wp_insert_post(array(
        'post_status' => 'publish',
        'post_title' => $row['title'],
        'post_content' => " ",
        'post_category' => array($term_id)
    ));

    if ($post_id) {
        // meta-values
        add_post_meta($post_id, 'medium', $row['medium']);
        add_post_meta($post_id, 'size', $row['size']);
        add_post_meta($post_id, 'date', $row['date']);
        $img = $slug . $row['filename']; // category slug (folder) plus the file name from the spreadsheet
        add_post_meta($post_id, 'image_file', $img);
    }

    // mark this record as processed so it won't be imported twice
    $update = $mysqli->prepare("UPDATE `content` SET processed = 1 WHERE content_id = ?");
    $update->bind_param("i", $content_id);
    $update->execute();
}

Managing clients, and their content, can be the most challenging  part of web development. Using the right software for the job makes it easier. So does having a toolbox of techniques, and being clever.

Managing server operations

This website was hosted on AWS, on an EC2 instance. At first, the instance size we selected was too small, which led to repeated website crashes. That experience led me to come up with a hacky work-around for restarting the server when it crashed – read about it here.

Image carousel – update

Image carousel update

In a previous post, I wrote about creating an image carousel using basic web tech: HTML, CSS, and vanilla JavaScript. No frameworks, no jQuery. This is an update to that. The major difference is that it supports multiple carousels on the same page. I also added a try/catch, in case no carousel data is found in the database. I recently used this implementation on a WordPress site. Each carousel was a post (of a custom carousel post-type) with its images attached. On that post-type archive page, I looped through the posts and created a separate carousel for each.

Here is the updated JavaScript.

try{
    var galleries = document.getElementsByClassName("carousel-class");
    for(var i = 0; i < galleries.length; i++){
        showGalleries(galleries.item(i), 0);    
    }
}catch(e){
    console.log(e);
}

function showGalleries(gallery, galleryIndex){
    var galleryDots = gallery.getElementsByClassName("dot-button");
    var gallerySlides = gallery.getElementsByClassName("my-slide");
    if (galleryIndex < 0){galleryIndex = gallerySlides.length-1}
    galleryIndex++;
    for(var ii = 0; ii < gallerySlides.length; ii++){
        gallerySlides[ii].style.display = "none";
        galleryDots[ii].classList.remove('active-dot');
    }
    if (galleryIndex > gallerySlides.length){ galleryIndex = 1 }
    gallerySlides[galleryIndex-1].style.display = "block";
    var resizeEvent = new Event('resize');
    window.dispatchEvent(resizeEvent);
    galleryDots[galleryIndex-1].classList.add('active-dot');
    //hide gallery navigation, if there is only 1
    if(gallerySlides.length < 2){
        var dotContainer = gallery.getElementsByClassName("dots");
        var arrowContainer = gallery.getElementsByClassName("gallery-arrows");
        dotContainer[0].style.display = "none";
        arrowContainer[0].style.display = "none";
    }
    gallery.setAttribute("data", galleryIndex);
}

//gallery dots
document.addEventListener('click', function (event) {
    if (!event.target.matches('.carousel-class .dot-button')){ return; }
    var index = event.target.getAttribute("data"); 
    var parentGallery = event.target.closest(".carousel-class")
    showGalleries(parentGallery, index);

}, false);

//gallery arrows

//left arrow
document.addEventListener('click', function (event) {
    if (!event.target.matches('.fa-arrow-left')){ return; }
    var parentGallery = event.target.closest(".carousel-class")
    var galleryIndex = parentGallery.getAttribute("data");
    galleryIndex = galleryIndex - 2;
    
    showGalleries(parentGallery, galleryIndex);
}, false);

//right arrow
document.addEventListener('click', function (event) {
    if (!event.target.matches('.fa-arrow-right')){ return; }
    var parentGallery = event.target.closest(".carousel-class")
    var galleryIndex = parentGallery.getAttribute("data");

    showGalleries(parentGallery, galleryIndex);
}, false);

You'll notice that each carousel section has a data attribute assigned, which stores that carousel's current slide index so our JS knows which one to affect. This version also includes left and right navigation arrows, in addition to the navigation dots we already had.

HTML:

<div class="ap-carousel" data="0">

<?php $num_slides = 0; foreach($posts as $post){ $num_slides++; ?>

	<div class="ap-slide">
		<a href="<?php the_permalink($post->ID); ?>" title="<?php the_title(); ?>">
			<img src="<?php echo esc_url(get_the_post_thumbnail_url($post->ID)); ?>" class="zoom">
		</a>
	</div>

<?php } ?>
<div class="nav-dots">
	<?php $active = "active-dot"; for($x = 0; $x < $num_slides; $x++){ ?>
		<div class="dot"><button data="<?php echo $x; ?>" type="button" class="dot-button <?php echo $active; $active = ''; ?>">b</button></div>
	<?php } ?>
</div>
<div class="gallery-arrows">
    <i class="fas fa-arrow-left"></i>
    <i class="fas fa-arrow-right"></i>
</div>


</div>

I emphasize simplicity when building solutions. I avoid including superfluous code libraries when a vanilla technique works. It’s helpful to keep track of solutions I engineer, and try to reuse them where they fit. And when they need to be adjusted to work with a new problem, I enhance them while still trying to avoid complexity.

Easy image carousel

image software

On a recent project, I needed a simple image carousel on the homepage. And then, on the gallery page I needed a fully polished solution. Sometimes, using a framework is the right choice. Other times, a fully built-out toolkit is overkill.

The Vanilla Option

First, here is the home-rolled version that I came up with. It was integrated into a custom WordPress template. I loop through a set of posts within my carousel wrapper, creating a slide div with that record’s featured image. I keep track of how many slides get built. Beneath the carousel wrapper I create a navigation div, and build a dot button for each slide. Each dot gets an index assigned to it, saved to its button’s data attribute.

HTML:

<div class="ap-carousel">

<?php $num_slides = 0; foreach($posts as $post){ $num_slides++; ?>

	<div class="ap-slide">
		<a href="<?php the_permalink($post->ID); ?>" title="<?php the_title(); ?>">
			<img src="<?php echo esc_url(get_the_post_thumbnail_url($post->ID)); ?>" class="zoom">
		</a>
	</div>

<?php } ?>
<div class="nav-dots">
	<?php $active = "active-dot"; for($x = 0; $x < $num_slides; $x++){ ?>
		<div class="dot"><button data="<?php echo $x; ?>" type="button" class="dot-button <?php echo $active; $active = ''; ?>">b</button></div>
	<?php } ?>
</div>


</div>

CSS:

I used CSS animation to create a fade effect between slides. I position the navigation dots using CSS flexbox layout.

.ap-carousel{
	position: relative;
}
.ap-slide{
	display: none;
	margin: 0 auto;
}	 
.ap-slide img{
	width: auto;
	display: block;
	margin: 0 auto;
	max-height: 90vh;
	-webkit-animation-name: fade;
	-webkit-animation-duration: 1.5s;
	animation-name: fade;
	animation-duration: 1.5s;
}
@-webkit-keyframes fade {
	from {opacity: .4} 
	to {opacity: 1}
}
@keyframes fade {
	from {opacity: .4} 
	to {opacity: 1}
}
.nav-dots{
	display: flex;
	justify-content: center;
}
.dot button{
	display: block;
	border-radius: 100%;
	width: 12px;
	height: 12px;
	margin-right: 10px;
	padding: 0;
	border: none;
	text-indent: -9999px;
	background: black;
	cursor: pointer;
}
.dot button.active-dot{
	background: red;
}

JavaScript:

Finally, I create a JS function to change the slide and active dot based on a timer. I attach an event listener to the dots that will change the active slide based on the saved index data.

var slideIndex = 0;
var slideTimer;
showSlides();

function showSlides() {
	var i;
	var slides = document.getElementsByClassName("ap-slide");
	var dots = document.getElementsByClassName("dot-button");
	for (i = 0; i < slides.length; i++) {
		slides[i].style.display = "none";
		dots[i].classList.remove("active-dot");
	}
	slideIndex++;
	if (slideIndex > slides.length) {slideIndex = 1}
	slides[slideIndex-1].style.display = "block";
	dots[slideIndex-1].classList.add("active-dot");
	clearTimeout(slideTimer); // avoid stacking timers when a dot click calls showSlides() again
	slideTimer = setTimeout(showSlides, 5000); // Change image every 5 seconds
}

document.addEventListener('click', function(event){
	if(!event.target.matches('.dot-button')) return;

	slideIndex = event.target.getAttribute("data");
	showSlides();
}, false);

That's a simple and lightweight solution. It worked fine for the homepage of this recent project, but the main gallery page needed something more complex. I chose Galleria, a JavaScript framework.

The Framework Option

Carousel showcasing artwork
Carousel showcasing artwork

I implemented this option on the WordPress category archive page. For this project, each piece of artwork is its own post. In my category template file I loop through posts, and populate a JavaScript array with data about each slide. Initially, I had built HTML elements for each slide, but that caused slow page load times. Passing the data to Galleria instead is significantly faster. Here's what my code setup looked like:

<div id="galleria"></div>
<script type="text/javascript">
	window.galleryData = [];
</script>
<?php if (have_posts()): while (have_posts()) : the_post(); 

$featured_img_url = get_the_post_thumbnail_url(); 

?>

<script>
window.galleryData.push({
	image: "<?php echo esc_url($featured_img_url); ?>",
	artinfo: "<div class='galleria-img-info'>" +
		"<h3 class='title'><a href='<?php the_permalink(); ?>'><?php the_title(); ?></a></h3>" +
		"<?php
			$size = addslashes(get_post_meta(get_the_ID(), 'size', true));
			$date = get_post_meta(get_the_ID(), 'date', true);
			$materials = get_post_meta(get_the_ID(), 'materials', true);
			if (!empty($size)) { echo '<p><strong>Dimensions:</strong> ' . $size . '</p>'; }
			if (!empty($date)) { echo '<p><strong>Date:</strong> ' . $date . '</p>'; }
			if (!empty($materials)) { echo '<p><strong>Materials:</strong> ' . $materials . '</p>'; }
		?>" +
		"<p class='you-can-mouse'>You can click the image to enlarge it.</p>" +
		"</div>"
});
</script>

<?php endwhile; endif; ?>

<script src="/galleria/galleria-1.5.7.js"></script>
<script type="text/javascript">
// Load the classic theme
Galleria.loadTheme('/galleria/galleria.classic.min.js');
//https://docs.galleria.io/collection/25-options
Galleria.configure({
    imageCrop: false,
    transitionSpeed:1000,
    maxScaleRatio:1,
    swipe:true,
    thumbnails: 'none',
    transition: 'fade',
    lightbox: true
});
// Initialize Galleria
Galleria.run('#galleria', {dataSource: window.galleryData, autoplay: 5000, extend: function() {
            // var gallery = this; // "this" is the gallery instance
            // gallery.play(); // call the play method
        }
});

Galleria.ready(function() {
	$(".loading").hide();
	this.bind('image', function(e) {
		// hook for per-image logic (left empty here)
	});
});
 </script>