Author Archives: Jeff

Open Files and Folders in Sublime Text from Terminal

This post is the first in what will hopefully be a weekly series of workflow tips and tricks, since I think many of these things can be helpful across the board for a lot of folks.

For a long time, I feel like I’ve relied too heavily on brute force to get a lot done. When there are tons of things to do, just work harder.

That philosophy has served me well, but eventually you reach the upper limit of what you can do with this methodology. I think I’ve been there for a while now, but it just took some time to realize that.

So, in an effort to work smarter around development efforts, I’m going to capture a bunch of small tips, tricks, shortcuts, and tools that should help me (and other people) develop smarter workflows.

How to Save a Few Keystrokes

Once you start doing stuff in the command line, it becomes awkward to have to jump between typing commands and using the mouse and GUI to do things in concert. For example, when creating a new project or prototype, you might do something like this:

cd desktop
mkdir project && cd project 
touch index.html 
touch app.js
touch style.css 
git init

So, in a few keystrokes, I’ve got my project folder and assets created and hooked up with Git, but from here I have to open the folder in Sublime Text by either dragging the folder to the application icon in the dock or opening the app and using the menus.

This is something I’m going to call context switching, which has an actual definition in computer science, but for this purpose we can consider it any action that makes you switch contexts: going from one application to another, switching from typing to using the mouse, etc.

Just like in computer processing, context switching is costly for humans as well. So, the longer we can remain in a single context, the more productive we can be.

I’ve seen a lot of people use a handy terminal shortcut to open up files and folders in Sublime Text, so I finally decided to set this up. It turns out it was very easy with a few commands.

Sublime Text Made Easy

First, make sure that everything is set up correctly. If you run the following command, it should open up Sublime Text:

open -a /Applications/Sublime\ Text.app

If Sublime Text opens, all is good with your setup. After that, you just need to add one line to your .bash_profile, and you’re good to go.

#open up .bash_profile for editing
nano ~/.bash_profile
#add this command to the end of .bash_profile
alias sublime="open -a /Applications/Sublime\ Text.app"
#then run this command to refresh the .bash_profile with this new command
source ~/.bash_profile

Now you should be able to use the sublime command to open up files and folders from the command line.

sublime myproject
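
Since the alias just wraps the open command, it works for folders as well as individual files. For example:

#open the current folder as a project
sublime .
#open a single file
sublime app.js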

Enjoy the benefits of some focused productivity! Be on the lookout for future workflow tips in the coming weeks.


Software Development is Hard

I’m writing this post at the end of a particularly odd week in terms of things going wrong. And this is the type of post that you write with fingers crossed, praying that you don’t have to come back later and update something because the universe decided you shouldn’t open your big fat mouth.

I’ve already done that once today:

For as long as I’ve been in technology, you’d think that stuff catching fire would roll off my back by now, but the truth is the anxiety of things breaking never really goes away. In reality, it only seems to escalate the closer you get to the source. It’s easy to say “Sorry, but it looks like that service is down. Our partner is working on it,” but much harder when you literally manage the server that is down or you wrote the thing that is broken.

There is a ton of flowery idealism about working in the coding profession nowadays, and it is definitely justified. Even in spite of weeks like this one, I spend most of my days engaged in some of the most thoughtful and creative discussions with tons of smart people. But there is certainly a deeper truth that underlies all of the positives:

 Software development is hard.

The rest of this post will be a thought-provoking, and hopefully humor-inducing, look at some of the things that make this type of work hard when it gets hard.

The Pace of Change is Blistering

We all know that technology changes at a rapid pace, but I would argue that the modern web developer’s tool belt, used to create the products most consumers think of as groundbreaking, changes at twice that speed. It has gotten to the point that some pieces of tech experience their full life cycle before your very eyes: something will be born, live, and then die all before you’ve shipped a large project.

While this churn has been talked about extensively in the JavaScript community, it is also a huge barrier to getting new people into the field and a disincentive to learn most tools deeply, for fear they will become obsolete before we can use them effectively.

At the end of the day, a lot of these tools are interchangeable and there is some amount of carryover, but it still leaves us feeling whiplashed.

We are Writing for Humans, not Computers

This is perhaps the single biggest lesson I learn over and over again, and it becomes more important as your projects consume and interact with other people’s projects. So you need to be acutely aware of this dynamic as you develop projects for open release or when you use, modify, or maintain the work of others.

When you are writing a plugin or library, try to think about the things that other people will do in the course of their day-to-day work that could clash with your code.

In other words, if people are very likely to use the variable $index when writing their own programs, try to make your code more resilient and use a variable name specific to the context.
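
As a rough sketch of that idea in JavaScript (the plugin and variable names here are hypothetical), you can keep state out of the global scope entirely and give it a context-specific name:

//wrap plugin code in a function scope so a user's own
//$index can never collide with anything in here
(function () {
  //specific to this plugin's context, unlikely to clash
  var audioSegmentIndex = 0;

  window.audioPlugin = {
    nextSegment: function () {
      audioSegmentIndex += 1;
      return audioSegmentIndex;
    }
  };
})();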

Like humans working any other job, sometimes things don’t make sense, aren’t well documented, or are just plain overlooked in the course of accomplishing some other, more important objective.

It’s very easy to leave a Google Maps API key tied to your email in a production system at an old job, where it chugs away for months until someone catches it.

It’s very easy to assume something that works and makes sense to you now will still do so for you or other people six months from now.

It’s hard sometimes to get to a level of understanding about the situation that allows us to accept these inevitable miscommunications and trust the other developers who make our lives easier on a daily basis, in one way or another.

// ¯\_(ツ)_/¯ Dont you dare delete this comment for it is magic


Creating Sustainable Patterns

One of the things we’ve been looking at really hard at ALT Lab is how we structure our development processes internally. Right now we’re working through how to make sure we can all collaborate on code in GitHub without creating conflicts, figuring out the best way to automate deployments to keep production and source control at parity, and pondering how to structure theme or plugin updates when literally thousands of sites depend on them in unknown ways.

All of this while remaining fast and agile. Innovation is our bread and butter, so there is no room for unsustainable process in the name of safety.

Like most things in development, there isn’t a RIGHT answer here, just lots of options with associated trade-offs.

There are certain laws that I use to guide most of my decisions, like the 80/20 rule or think win-win, but I’m continually bashed over the head by the law of diminishing returns in my developer life.

Like most learning curves, at first things are glorious, but your typical effort quickly stops producing the typical results. From my perspective, any investment in refining a development process is destined to follow the same trajectory: at some point, the tools and processes meant to make our lives easier will begin to work against us.

However, this is the essential quagmire of the situation.

It’s easy to look at a chart like that and pinpoint when we start to regress, but it’s much harder to do when you are standing on the chart.

It’s like trying to get a view of the mountains while you’re looking at your feet while you hike.

But these are the types of decisions that need to be made, and just one more reason to quietly whisper the refrain:

damn, this stuff is hard.

Audiographics: Annotating Sound

Over the last several weeks, I’ve been working on several prototypes to help facilitate different types of annotations. Most educators are already pretty familiar with the typical textual annotation, but as new media becomes more important, we ought to have tools that facilitate annotation on other types of media as well.

What is an audiographic?

Which leads me to my first prototype for audio annotation, something that I’m calling an audiographic: a play on the omni-popular infographic, but also a nod to the powerful contradiction we get when we combine something that was solely auditory with a visual component.

One of the reasons that annotation can be so powerful as a learning strategy is that it immediately moves someone from passive consumption to active creation. Even if we are scrawling in the margins of a book, we are participating in the creation of the narrative around the text, whether or not we are the only ones with access to the annotations themselves.

For most people this is a straightforward interaction with text or the written word. We are used to taking notes while reading, but for many people other media types like audio, video, and images remain a bit more inaccessible.

Making music, painting, drawing, or sculpting are all things that OTHER people do, those creative folks lucky enough to be born with an artistic impulse. Hopefully, audiographics will allow people other than typical artists to join the conversation as well and articulate what they are hearing.

You can see an example of the plugin below:

So What by Miles Davis

This tune leads in with some rubato improv by the piano that helps to emphasize the Dorian mode of the piece. The progression is an AABA structure and really only uses a few chords. The half-step rise during the B section helps to build some tension, while the repeated measures in D minor allow for some pretty melodic phrasing.

Around this mark, Miles makes his entrance with some long phrases. In between phrases, you can hear the piano continue a soft vamp around the D minor / E-flat minor progression.

Around here, the bass starts playing the iconic head for this tune. The two horn stabs after each phrase played by the bass even sound like someone saying "So What."

[Embedded audiographic player: displays the current playback time and the active segment’s name, start time, and end time]

One of the cool things about the idea behind an audiographic is the ability to add different types of media to particular audio segments: we can include text to add additional context to a particular segment, images to help build a cultural understanding of the movement or period, and even other audio segments that might be similar to/different from the main piece we are analyzing.
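
As a rough sketch (not necessarily the plugin’s actual schema, though the field names mirror the ones the player template uses), a single annotated segment might look something like this:

//a hypothetical data structure for one annotated segment
var segment = {
  segmentName: 'Piano intro',
  startTime: 0,   //seconds into the recording
  endTime: 33,
  text: 'Rubato improv that establishes the Dorian mode',
  images: ['miles-davis-studio-1959.jpg'],     //related media are made-up examples
  relatedAudio: ['all-blues-excerpt.mp3']
};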

Other Applications

Down the road, there are also lots of other applications for this type of tool outside music. People involved in oral histories could use this pattern to add context to a narrative. People involved in discourse analysis could use this tool to segment and code different pieces of spoken discourse. With some additional tooling, groups could collectively annotate other audio resources, like popular podcasts, as a part of a class assignment.

Open Source Technology

Since this project is a larger initiative, and not just a bespoke tool, I’ve decided to document the technical details in a longer post dedicated to that subject, but the open source ethos behind it is important enough to mention here as well.

To make this tool widely available and easy to use, we decided to make it into a WordPress Plugin, which you can find in an evolving form here on GitHub. It builds on and extends other open source technology created by the BBC.

It’s amazing to me how the tech community can be altruistic and entrepreneurial at the same time. While most enterprises focus solely on commercial viability, the development community has smartly realized that strong open source practices allow for wider dissemination of its chief products.

Because of this, we can start to leverage this technology into some real cost savings from an educational perspective. All coolness of the audiographic aside, this plugin gives music instructors the ability to create and publish their own listening guides, something students would typically have paid a textbook company hundreds of dollars for.

However, perhaps more importantly, they can now own the means of production. The web-based portals with annotated audio files are no longer something produced for them, rather they are now something within their ability to create as a larger process of demonstrating competency in close listening and appreciation.

All edu-rants aside, this is a very fun project that has challenged me to learn some additional stuff about JavaScript Web APIs, engage with and modify open source projects from a big-name player like the BBC, and work through how to structure a WordPress plugin designed for wide usage and distribution.


The Future of JavaScript and the Browser

I’ve been doing a deep dive into Vanilla JavaScript lately, partly as a pretty overt reaction to the proliferation of client-side MV* frameworks. My thought process is that the time spent learning frameworks isn’t necessarily transferable beyond some of the high-level concepts they help you solve: things like routing, data-binding, and dependency injection. But once you learn what data-binding is with AngularJS, it’s pretty much the same in Knockout, React, or Angular 2, even though it may be implemented differently using different tooling.

As the JavaScript standard continues to evolve, it’s my feeling that we should need these types of frameworks less and less, as the problems they solve become built into the language itself.

I want to take a second and talk about some of the ways in which I see this transformation happening, using some features of ES6.

JavaScript | Fetch and Promises

Most modern applications have some need to manage AJAX requests, so numerous libraries have tried to simplify this common problem. Although not too difficult to grasp on its own, there is a certain tedium in handling all of the particular states of an XMLHttpRequest. jQuery is perhaps the most popular example of taking this pattern from the vanilla XHR to something as quick as $.ajax.

The downside is that as we abstract away from the lower-level APIs (as low-level as JavaScript can be), we also lose some understanding of what is going on underneath the hood.

Therefore, it’s great to see browsers implementing parts of the ES6 standard that simplify this so that no library is needed. Let’s take a look at a few examples.

var xhr = new XMLHttpRequest();
xhr.open('GET', '/some-path-here.css', true);

xhr.addEventListener('load', function () {
  console.log(this.responseText); 
});

xhr.send();

This is a pretty bare-bones AJAX request, so imagine it with more config boilerplate setting headers or a more intensive callback. There is a clear pattern here: create a new XHR object, open a connection, register a callback, send the request, run the callback on response. However, once we need to process and use the data from the initial response in subsequent calls, the pattern breaks down.

Additionally, to do error handling in this paradigm, we’d also need to register another event listener and callback. It’s pretty easy to see how that could get messy quickly.
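
To make that concrete, here’s a sketch of two dependent requests in the raw XHR style (the URLs here are made up); each request needs its own load and error listeners, and the nesting deepens with every dependent call:

var xhr = new XMLHttpRequest();
xhr.open('GET', '/user.json', true);
xhr.addEventListener('load', function () {
  var user = JSON.parse(this.responseText);
  //the second request depends on data from the first
  var xhr2 = new XMLHttpRequest();
  xhr2.open('GET', '/posts/' + user.id + '.json', true);
  xhr2.addEventListener('load', function () {
    console.log(this.responseText);
  });
  xhr2.addEventListener('error', function () {
    console.log('second request failed');
  });
  xhr2.send();
});
xhr.addEventListener('error', function () {
  console.log('first request failed');
});
xhr.send();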

Fetch to the Rescue

Not that they are perfect, but Promises make this much easier to handle by addressing the need to process and build data objects from successive network calls.

Nowhere is this utility better illustrated than in the use of the new Fetch API. Although I’m not sure it has wide enough browser adoption to use in production without a polyfill, the future looks very promising with this handy little addition.

fetch('/some-data-path-here.json')
  .then(function (response) {
    return response.text();
  })
  .then(function (text) {
    console.log(text);
  })
  .catch(function (error) {
    console.log(error);
  });

While there are still more than a few callbacks in this pattern, at least their execution happens in what appears to be a synchronous manner. In typical async patterns, it’s easy to get lost in callback hell, but the then/catch pattern makes things a whole lot simpler.

I’ll be looking forward to using the Fetch API liberally when its browser support grows; until then, I’m happy to create my own wrappers around the standard XMLHttpRequest and promises.
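
As a minimal sketch, that kind of wrapper only takes a few lines:

//wrap the XHR lifecycle in a Promise so it chains like fetch
function get(url) {
  return new Promise(function (resolve, reject) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.addEventListener('load', function () {
      resolve(this.responseText);
    });
    xhr.addEventListener('error', function () {
      reject(new Error('Request failed: ' + url));
    });
    xhr.send();
  });
}

//now it reads just like the fetch example above
get('/some-data-path-here.json')
  .then(function (text) { console.log(text); })
  .catch(function (error) { console.log(error); });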


Command Line: Download Files with cURL Command

I’m not sure how I didn’t know about this command sooner, since I use cURL for a few other tasks. I’ve been using wget to download remote files, but I recently stumbled across this new little shortcut that should save me a few seconds here or there.

If you’d rather just watch a short video, here it is:

I’m putting it here so that I don’t forget about it. Like all of the best commands, there are a few variations.

Naming the Output File

This first variation lets you download the remote file into a new file with a name you supply:

curl http://www.gutenberg.org/cache/epub/1322/pg1322.txt -o leavesofgrass1855.txt

Here we call the cURL command, give it a URL to some resource to download, then use the -o flag to specify an output file. This would result in a file called leavesofgrass1855.txt being created with the results of the download.

Using Original File Name

This second variation just downloads the file using the original name of the remote file by using a different flag.

curl -O http://www.gutenberg.org/cache/epub/1322/pg1322.txt

Now, we’ve got a file with this same name in the working directory.

Benefits of cURL command

There are lots of other things you can use the cURL command for, so I’ll be sure to cover some of those additional topics in future videos. If you can learn a few helpful commands here and there, before you know it you will be using the command line all the time to speed up development workflows and automate repetitive tasks.
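
To give a quick taste, two flags worth knowing are -L, which makes cURL follow redirects (handy when a file has moved), and -s, which silences the progress output inside scripts:

#follow redirects and keep the original file name
curl -L -O http://www.gutenberg.org/cache/epub/1322/pg1322.txt
#download quietly into a named file, e.g. from inside a script
curl -s http://www.gutenberg.org/cache/epub/1322/pg1322.txt -o leavesofgrass1855.txt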

Here is a link to the docs for the cURL command for further reading.


WordPress NGINX Proxy Server Subdomain to Subdirectory

For a recent project, I implemented an NGINX proxy server to proxy requests from a WordPress installation on a subdomain so that they appear to come from a subdirectory of the main domain.

Good: http://maindomain.com/blog
Bad: http://blog.maindomain.com

There are lots of SEO and technical benefits to setting up a proxy server like this, a few of which I’ll go over in this post.

However, this was not as straightforward a task as I had hoped. I consulted numerous other resources out there, but none of them seemed to cover 100% of my use case, so I decided to write up this blog post. Here is a quick preview of the resulting architecture:

[Diagram of the resulting architecture]

Why to install a proxy server

Proxy servers are basically the traffic cops of the internet. They act as a gateway to other parts of your application and shuttle requests to and from other servers, giving you the ability to really dial in your architecture.

There are also lots of reasons to use a proxy server from an SEO perspective, which is the main reason I decided to use one for this project. This infographic from Moz sums up the arguments for using a proxy server nicely.

Initial Setup

In this case, there is a NodeJS application sitting on the main server, with the DNS records for the main domain pointing at it in Amazon Web Services. It seemed in bad taste to put the WordPress installation on the same server as the Node app since they have slightly different requirements and use cases. Instead, the simplest route was to spin up another server and point a subdomain at it. The resulting architecture looked like this:

www.maindomain.com –> NGINX proxy –> Node app

blog.maindomain.com –> Apache –> WordPress

At first, everything was great. All of the servers were easy to manage since each had a specific purpose, and I could optimize the performance of each one independently: think heavy caching on the WordPress server and quick response times for the Node API.

Then, the results of SEO indexing started to roll in. Since Google treats subdomains as almost separate domains, none of the SEO juice from the blog subdomain was trickling up to the main domain. Since the primary reason for the blog in the first place was content marketing, this seemed like a major defeat.

To get the desired results, I needed to serve all of the WP content from the main domain. This is the ideal structure:

www.maindomain.com/blog

Since putting both applications on the same server still seemed out of the question, I decided to explore the proxy server route. This allowed me to keep the blog on its own server on a subdomain, but also get the SEO benefits of having the blog served as a subdirectory.

Setting Up the NGINX Proxy Server 

As it turns out, NGINX, which already acts as a proxy for the Node app, is great at being a proxy, so I’ll talk a little bit about how to set that up below.

server {

    location /blog {
        proxy_pass http://blog.maindomain.com;
    }

    location / {
        proxy_pass http://nodeapp;
    }

}

Inside of the main NGINX server block, you can set up a separate location directive for a particular path, in this case anything pointing at /blog. Using proxy_pass, you can hand off that traffic to another server, in this case the blog subdomain, to get the results, and then NGINX will pass the results back to the browser.

It would have been great if things were that simple for me. In many of the examples I saw, this is where the directives stop. Oh, just proxy all requests to the /blog path, and you’re good.

To get all of my stuff working right, I had to add in a few other location directives to get to the right folders on the WP site. They ended up looking like this:

location /blog/wp-content {
    proxy_pass  http://blog.maindomain.com/wp-content;
} 

location /blog/wp-includes {
    proxy_pass http://blog.maindomain.com/wp-includes;
}

location /blog/wp-login.php {
    proxy_pass http://blog.maindomain.com/wp-login.php;
}

location /blog/wp-admin {
    proxy_pass http://blog.maindomain.com/wp-admin;
}

These directives handled the remaining requests that were returning 404 errors with just the initial directive. I’m sure this was due to some Apache rewrite rules, but those proved more elusive to track down.

There was still a lot of configuration to do in WordPress itself and on its Apache server, but I recommend setting up the NGINX proxy_pass rules first to test them out before you change the address of your WordPress site and shut off outside traffic to the subdomain.

Configuring WordPress and Apache to Handle the Proxy

In addition to getting all of the server rules in place, there are a couple of things you need to do within WordPress itself to get everything ready to go.

The first step is changing the Site Address and Home URL. There are a couple of ways to do this: through the WP-Admin interface, the wp_options table in the database, or in the wp-config.php file. My recommendation would be to use either the first or second option; I was able to use the first.
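
For reference, the wp-config.php route (the third option above) comes down to two constants that override the database values; I didn’t need it here, but it looks like this:

//hardcode the Site Address and WordPress Address;
//these override the values stored in wp_options
define('WP_HOME', 'http://maindomain.com/blog');
define('WP_SITEURL', 'http://maindomain.com/blog');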

If you log into WP and go to Settings > General, you will see the WordPress Address and Site Address fields.

The WordPress Address is the address of the WP installation; it’s where WordPress goes looking for files, images, etc. The Site Address is what WordPress uses to set the address in the address bar and write out links to posts, pages, etc.

In my case, I ended up changing both of these to http://maindomain.com/blog so that all URLs were written relative to that path, but also so that all traffic for scripts, stylesheets, and images came through the proxy server. Some examples I saw had the Site Address set to http://maindomain.com/blog and the WP Address set to http://blog.maindomain.com.

Controlling Traffic to Subdomain

I actually tried that as well, and from a technical perspective everything worked fine with the proxy server. All of the URLs were http://maindomain.com/blog, but all of the include and content requests were still made to the subdomain.

For some implementations, that would have been fine, but since the goal was to not have the subdomain indexed at all, I decided to route all traffic through the proxy server so that I could prepare for the next step.

Shutting Off External Traffic to Apache and WP

With all of the files coming in correctly using the proxy traffic rules, it was time to shut off traffic from the sources I didn’t want, namely Google, its spiders, and the rest of the internet.

I muddled with a few options to do this. One option I considered was passing a header value from the proxy server to Apache to check against, which might have been a better decision for a group of proxy servers, but since there was just the one machine, checking its IP address seemed like a good solution.

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
# RewriteCond %{HTTP:Proxy-Forward} !^True$
RewriteCond %{REMOTE_ADDR} !=11.111.11.111
RewriteRule ^(.*)$ - [L,R=404]
</IfModule>

To get this working, you can add a rewrite rule in your .htaccess file. I kept the custom header example in the comments in case that idea might help someone. Make sure to put this before any default configuration added by WordPress.

I had some debate here about what HTTP status to return for traffic from the wild, but I finally settled on 404. 403 Forbidden made it sound too interesting, and I wanted the search engines and everyone else to ignore it. Had the initial blog content been out there for a long time, I might have tried to get a 301 redirect to work, but with so many failed redirect loops early in the process, it seemed only fitting to start fresh and squash the old content.

Cleaning Up WP and the WP Database

It’s up to you whether to do this step before or after cutting off Apache traffic, but you can’t forget this step if you expect a good user experience. There are two things to look out for here:

  1. Hardcoded theme references
  2. Database references

If your WordPress theme is pretty stock, or if you haven’t made a lot of modifications of your own, you might not need to look for these. However, with a project of any decent size, there is a good chance that someone left a link to an old resource or hardcoded something into a stylesheet or src attribute.

An easy way to check is by just browsing the site. Usually, this can spot most issues, but it’s also worth a quick search of the files for any references to the old site. You can do this with a Linux command like this one:

find /html -type f -exec grep -H 'text-to-find-here' {} \;

There are a few other variations of commands you could use, and I got that one from this post on StackOverflow. I ended up having to edit a few theme files on the server to swap out an absolute URL for a relative URL.

Hopefully you don’t find anything hardcoded, but if you do, update it, and let’s move on.

Lastly, you need to update the WordPress database for references to the old site address. I chose to use a plugin for this, and I’m thankful I did. It took about 7 minutes to run, so I can only imagine how long it would have taken me to script that whole process for all of the WP tables!

I opted for a really simple plugin called Find and Replace All. It’s not fancy, but it does the one thing I needed it to do. Not a lot of ratings, but a ton of installs. Even though this plugin generates an SQL backup for you, I’d recommend creating one yourself as well.
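
If you’d rather skip the plugin, and WP-CLI happens to be installed on the server, its search-replace command does the same job and handles serialized values:

#do a dry run first to see what would change
wp search-replace 'http://blog.maindomain.com' 'http://maindomain.com/blog' --dry-run
#then run it for real
wp search-replace 'http://blog.maindomain.com' 'http://maindomain.com/blog'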

Resources

Like I said, I consulted a bunch of resources to get this setup working. Here they are in no specific order:

https://codex.wordpress.org/Moving_WordPress

Nginx / WordPress — Proxy Subdirectory to WordPress Subdomain

Unix — Setup WordPress on Apache PHP5 through Nginx Reverse Proxy

https://www.digitalocean.com/community/questions/nginx-as-proxy-to-wordpress

https://www.digitalocean.com/community/questions/nginx-proxy_pass-to-wordpress-on-remote-server


Creating a traffic light with Raspberry Pi and Python

There are endless projects you can make with Python, the Raspberry Pi, and just a few LEDs. One obvious and really fun project is a button operated traffic light. This post will build on an earlier project about connecting a button using a breadboard and the RPi.GPIO library, so if you have any questions about the initial setup of the electronics, check out that post.

In this post, I want to take a little bit more time to write out a good Python program to handle our traffic light. If you want to get the code, you can check out the GitHub repo here or run this command in a terminal:

$ git clone https://github.com/JEverhart383/raspberry-pi-gpio.git

Breadboard Settings

Here is the wiring configuration I used for the electronic components of the Pi traffic light project.

GPIO 19 -> Button + -> Button – -> Ground

GPIO 23 -> Resistor -> Red LED -> Ground

GPIO 12 -> Resistor -> Yellow LED -> Ground

GPIO 16 -> Resistor -> Green LED -> Ground

The GPIO numbering convention does not match the physical pin numbers, so be sure to keep that in mind when setting up your circuits.

Writing Our Python Script

The Python code for this project will be using the same basic structure we looked at in my post on using the cleanup method of the RPi.GPIO library.

Up until now, we’ve been writing pretty spaghetti string code, but as our Raspberry Pi programs become more complex, we’ll want to start writing lasagna or ravioli code instead.

What is the difference between spaghetti code and ravioli code?

Spaghetti code is what the name suggests: a big, tangled ball of different code statements put together to make a meal. Each line stands on its own and is responsible for doing something, with little attempt at structuring the code into reusable components. Let’s take a look at an example:

if input_state == False:
    print('Button Pressed')
    time.sleep(0.2)
    GPIO.output(ledGreen, 1)
    time.sleep(1)
    GPIO.output(ledGreen, 0)
    GPIO.output(ledYellow, 1)
    time.sleep(1)
    GPIO.output(ledYellow, 0)
    GPIO.output(ledRed, 1)
else:
    GPIO.output(ledGreen, 0)
    GPIO.output(ledYellow, 0)
    GPIO.output(ledRed, 0)

This was the first piece of code I wrote to make the traffic light. For the most part, it worked the way that I wanted it to. However, if we look at the guts of the program that sets the GPIO output on the pins, it reeks of spaghetti code.

For example, say I wanted to change from a one second delay between lights to two seconds, or half a second. Using this method, I’d have to update every instance of time.sleep() with the delay that I wanted.

One of the ways you can combat spaghetti code is to organize your code into functions, which are reusable code blocks that can make your programs more flexible. Let’s look at an example of how I organized this code into a function called lightTraffic():

def lightTraffic(led1, led2, led3, delay ):
    GPIO.output(led1, 1)
    time.sleep(delay)
    GPIO.output(led1, 0)
    GPIO.output(led2, 1)
    time.sleep(delay)
    GPIO.output(led2, 0)
    GPIO.output(led3, 1)	
    time.sleep(delay)
    GPIO.output(led3, 0)

lightTraffic(ledGreen, ledYellow, ledRed, delay)

This function is one small step toward optimizing the code and improving its reusability. First, I define and name the function, then specify the four parameters it accepts: led1, led2, led3, and delay. Later, when we call the function, we pass in the values we want it to use in place of those parameter placeholders.

Technical Aside: Explaining Function Parameters

It’s good to think of a parameter as a placeholder for a value you specify later when calling the function. For example, I pass a parameter called delay into the lightTraffic() function; anywhere inside the function that I want to use that value, I simply write delay, as in time.sleep(delay). When I run the function and pass it actual values, in this case 1, the function replaces all instances of the delay parameter with the value of 1 that I passed in.

Calling Our Function

Finally, now that we have all of our traffic light code organized into a function, we need to call or execute that function when the button is pressed. Check out the finished script here:

import RPi.GPIO as GPIO
import time
  
try:
  def lightTraffic(led1, led2, led3, delay ):
    GPIO.output(led1, 1)
    time.sleep(delay)
    GPIO.output(led1, 0)
    GPIO.output(led2, 1)
    time.sleep(delay)
    GPIO.output(led2, 0)
    GPIO.output(led3, 1)	
    time.sleep(delay)
    GPIO.output(led3, 0)	
  GPIO.setmode(GPIO.BCM)
  button = 19
  GPIO.setup(button, GPIO.IN, pull_up_down=GPIO.PUD_UP)
  ledGreen = 16
  ledYellow = 12
  ledRed = 23
  GPIO.setup(ledGreen, GPIO.OUT)
  GPIO.setup(ledYellow, GPIO.OUT)
  GPIO.setup(ledRed, GPIO.OUT)
  while True:
    input_state = GPIO.input(button)
    if input_state == False:
      print('Button Pressed')
      lightTraffic(ledGreen, ledYellow, ledRed, 1)
    else: 
      GPIO.output(ledGreen, 0)
      GPIO.output(ledYellow, 0)
      GPIO.output(ledRed, 0)
except KeyboardInterrupt:
  print("You've exited the program")
finally:
  GPIO.cleanup()

Now we are starting to reap the benefits of ravioli code. Our lightTraffic function is a nice self-contained piece of code, while the while loop that reacts to input is another ravioli. There are tons of other ways I could have made this code more reusable, so if you have a cool idea or solid suggestion, leave a comment below.
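
For example, one small refactor (a sketch I haven’t run on the hardware) would be to pass the LEDs in as a list, so the same function works for any number of lights in any order:

def lightSequence(leds, delay):
    #turn each LED on, wait, then turn it off before moving on
    for led in leds:
        GPIO.output(led, 1)
        time.sleep(delay)
        GPIO.output(led, 0)

lightSequence([ledGreen, ledYellow, ledRed], 1)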

