Making a Chatbot with Amazon LEX

What follows here is an exploration of an evolving project I’m working on to provide some additional touch points for current and prospective students in online courses at VCU.

Chatbots, AI, Machine Learning, and other terms with similar connotations seem to be all the rage nowadays, and using publicly available cloud services, we can get pretty close to creating some powerful new tools ourselves.

What is a Chatbot?

First, let’s get this definition out of the way. Amazon bills its Lex service as “a service for building conversational interfaces into any application using voice and text.”

And this is a pretty good way of thinking about what a chatbot really is: an interface. At the end of the day, most of us don’t talk or write just for our own enjoyment; we do so to produce results, get information, or make something happen.

With a chatbot, or conversational interface, we can allow people to arrive at those ends using natural language instead of an interface that I might construct out of buttons and form fields.

However, the metaphor of the interface is pretty apt here, as we are still essentially inputting data by talking or typing and getting data back from some backend service.


While the chatbot becomes our interface, we can interact with that chatbot over a number of different channels using (almost) turnkey integrations into Amazon Lex. For example, I was able to make our chatbot available through Twilio SMS and Slack in a few hours.

We can also easily have our chatbot interact with any backend services we want in order to provide people with the answers they need. On the other end of the spectrum, there are also mechanisms for human oversight of the bot’s responses, enabling the very humans the bot is meant to replace to further train it.

Getting Started with a Chatbot using Amazon Lex

This post isn’t going to give a step-by-step guide, as Amazon has produced one of those that is sufficient to get you going.

Rather, I’d like to start at a high level with what types of workflows and vocabulary you need to understand to build your own chatbot to solve whatever issues you are facing.

First, let’s start with the idea of intent.

What do we want to do?

For Amazon Lex, an intent is a specific building block, but at a more philosophical level, most of our language has a specific intent associated with it, either carried implicitly in the utterance (linguist-speak for some act of language production, e.g. writing or speaking) itself or stated explicitly.

To start building a chatbot, you need to settle on some intents and then decide what types of utterances people would use to express such an intent.

For example, maybe we have an intent where someone wants information about online courses:

[Screenshot: a sample intent in the Amazon Lex console]

Once we decide on the broad strokes of an intent, we need to add some sample utterances that might express that intent in various ways. This is where the AI takes over and creates models based on your samples so that any similar phrases trigger the same intent.

It’s worth noting here that there is an ongoing process to refine these prototype utterances. People always interact with systems in unexpected ways, and conversation is perhaps more fraught with those ambiguities. AWS allows you to look at the actual utterances people have used with your bot and add them to intents if they made your bot go WTF!? the first time around.

What do we need to know to fulfill your intent?

From here we can start by talking about Types, or as Amazon Lex calls them, Slot Types. Slot types are the nouns or adjectives that we need to begin to fulfill the user’s intent.

For example, there are hundreds of online courses offered at VCU each semester, so we need to gather some additional information using prompts meant to elicit specific details.

[Screenshot: slot configuration in the Amazon Lex console]

In my example, we need to know something about the course type, e.g. title, subject, discipline, and what level the student is studying at, e.g. undergraduate, graduate.

Here we can mark certain slots as required and provide some prompts for the chatbot to use as it negotiates with the user to get this additional information. We also have the option of letting Lex intuit the nouns or adjectives necessary for our slots, or we can specify the values the bot will accept, e.g. only allowing large, medium, and small as possible values for a PizzaSize slot.
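As an aside, slot types don’t have to be created by hand in the console; the Lex Model Building API can define them programmatically. This is a rough sketch of the PizzaSize example using the AWS SDK for JavaScript, an assumption based on my reading of the SDK docs rather than code from this project:

const AWS = require('aws-sdk')
const lexModels = new AWS.LexModelBuildingService({ region: 'us-east-1' })

// Define a custom slot type that only accepts three sizes
lexModels.putSlotType({
    name: 'PizzaSize',
    description: 'Allowed pizza sizes',
    enumerationValues: [
        { value: 'large' },
        { value: 'medium' },
        { value: 'small' }
    ]
}, (err, data) => {
    if (err) console.error(err)
    else console.log('Saved slot type:', data.name)
})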

Once all of the required slots are filled in the chatbot session, we can write some additional backend logic to fulfill the user’s intent.

Give the People What They Want

For each intent you create for your chatbot, you can choose what it means to fulfill a request and how exactly that might fit into the flow of your other systems.

For example, if each request will get the same response, we can write canned messages that get shown when certain intents are fulfilled. But we can also use the chatbot as an interface to a larger system that might get additional information, create an appointment, or order a pizza.

For that reason, the conversational interface that chatbots represent will continue to grow in usefulness. For AWS Lex, this typically means using a Lambda function to connect to these other systems.

At the risk of over-hyping things by talking about chatbots and serverless code in the same blog post, this was something I got running in an hour or so:

const AWS = require('aws-sdk')
const S3 = new AWS.S3({
    maxRetries: 0,
    region: 'us-east-1',
})

// Pull the full course catalog from a JSON file stored in S3
const getCourses = () => {
    return new Promise((resolve, reject) => {
        S3.getObject({
            Bucket: 'your-bucket-here',
            Key: 'your-file.json'
        }, function (err, data) {
            if (err !== null) {
                return reject(err)
            }
            resolve(JSON.parse(data.Body.toString('utf-8')))
        })
    })
}

const processRequest = (request, callback) => {

    // Slot values gathered by Lex during the conversation
    // (StudentLevel is captured but not yet used in this simple MVP)
    const CourseType = request.currentIntent.slots.CourseType
    const StudentLevel = request.currentIntent.slots.StudentLevel

    getCourses()
    .then(data => {
        let message
        // Keep courses whose subject description contains the requested type
        let courses = data.data.filter(course => {
            return course.subject_desc.toLowerCase().includes(CourseType.toLowerCase())
        })

        if (courses.length === 0) {
            message = "Sorry, we couldn't find any courses in that discipline or at that level. Let me know if you want me to look again for something else."
        } else {
            message = `We found ${courses.length} courses: \n
                ${courses.map(course => `${course.subject} ${course.course_number}-${course.section}: ${course.title}`).join('\n')}
            `
        }

        // Response shape Lex expects back from a fulfillment Lambda
        let response = {
            "dialogAction": {
                "type": "Close",
                "fulfillmentState": "Fulfilled",
                "message": {
                    "contentType": "PlainText",
                    "content": message
                }
            }
        }

        callback(null, response)
    })
    .catch(err => callback(err))
}

// Expose the function as the Lambda handler
exports.handler = (event, context, callback) => processRequest(event, callback)

This is some example backend logic currently running on our chatbot. When a particular intent is ready to be fulfilled, the chatbot passes a message to this Lambda function written in JavaScript. The script takes the course information stored in the chat slots, pulls in a large array of course data from a JSON file stored in S3, then looks for courses that match the user’s input.
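For context, the event Lex hands to the Lambda function looks roughly like this trimmed-down example; the intent name and slot values here are illustrative, not the exact ones from our bot:

{
    "messageVersion": "1.0",
    "invocationSource": "FulfillmentCodeHook",
    "currentIntent": {
        "name": "FindOnlineCourses",
        "slots": {
            "CourseType": "biology",
            "StudentLevel": "undergraduate"
        },
        "confirmationStatus": "None"
    },
    "sessionAttributes": {}
}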

The Lambda function then creates a JSON response with a message to the user that it passes back off to the chatbot. While this is a pretty simple MVP for this concept, hopefully you can see that there really isn’t a limit to the sophistication of the types of tasks we can complete using the conversational interface.

At the same time, while the Natural Language Processing used by the chatbot is impressive to someone who’s spent years studying syntax and semantics, bots are not some magical box that will make things happen on their own.

All of the examples created by Amazon exhibit complex application logic that tells the chatbot how to respond based on user input. The chatbot does a good job of analyzing human utterances and saying “Hey, it sounds like they want to do X.” However, even getting this right requires a lot of human intervention throughout the process.


Hopefully, this post will be a helpful introduction for folks interested in the latest chatbot craze, but I also hope it underscores the limits of the ‘AI’ products being touted at present. Creating even a moderately functional chatbot requires much more human involvement than anyone proclaiming wizardry will want to admit, so don’t ignore the man behind the curtain.


Resetting Triggers in Google Apps Script

Occasionally, the triggers that run a Google Script attached to a spreadsheet or document will start to malfunction. Over the last five years, I haven’t been able to identify meaningful patterns for why these triggers and their associated scripts fail; it just happens sometimes.

However, here are a few things to look out for when using a running Google Script:

  1. If you make changes to the underlying sheet, form, or document, things sometimes break. That can mean changing a setting, adding a form field, or moving a row/column. In my experience, this has been the hardest thing to identify.
  2. If Google changes or updates something, your stuff can break. Each of these scripts relies on conventions associated with the GSuite ecosystem, so there can be ripple effects if one service changes how it does business.
  3. Script authorization matters. If you are sharing a sheet and script among people, this can sometimes cause the script to do funky things. For example, we all might share access to a Google Sheet, but the script is authorized to take actions (send email, modify the sheet, etc.) as one person: the original Google account that authorized the script. Having other authenticated users interact with the script tends to make things go sideways.

The Good News

The good news is that these issues can most likely be resolved by resetting the project’s associated triggers, or in some severe cases, by creating a duplicate script entirely. Fortunately, if something starts to error out in a Google Script you own, you should get a failure notification from Google.

In most cases, that notification will give you some additional information about why and where the script is failing.

Below are the steps to take when you need to reset a project’s trigger.

Reset a Trigger on a Google Script

The first thing we need to do is open up the script editor from the Google Sheet or Doc the script is attached to. To do that, open the Tools menu and click ‘Script Editor,’ which will launch the script editor in a new window:

[Screenshot: opening the Script Editor from the Tools menu]

After you’ve opened the script editor, you’ll need to locate the project’s triggers and modify them. In the Script Editor, open the ‘Edit’ menu and locate the option labeled ‘Current project’s triggers.’

[Screenshot: the ‘Current project’s triggers’ option in the Edit menu]

When you click ‘Current project’s triggers,’ a pop-up will open. If you don’t have any triggers set, you can set one now. However, if you have an old trigger that is failing, you’ll want to click the ‘X’ button next to the trigger to remove it from the current project. Once you’ve done that, make sure to click ‘Save’ and close the pop-up menu.

[Screenshot: the current project’s triggers pop-up]

Now that we’ve cleared out our failing trigger, you can add a new one. Reopen the current project’s triggers menu, then click the link labeled ‘No triggers set up. Click here to add one now.’ 

That will give you a set of menus that look just like the options in the image above. GSuite apps support a lot of installable triggers, but in my experience most are either time-driven or used to trigger something when someone submits a form. If that’s the case, where you want to listen for a form submission and then take some action, the configuration above is pretty much the default for that scenario.

Just remember to click ‘Save’ when you add the trigger before closing the menu, and be sure that the function you want to run is selected in the first drop-down menu.

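If you find yourself resetting triggers often, you can also manage them programmatically with the ScriptApp service. Here is a minimal sketch, assuming a spreadsheet-bound script with a trigger function named onFormSubmit:

function resetTriggers() {
  // Remove every existing trigger for this project
  ScriptApp.getProjectTriggers().forEach(function (trigger) {
    ScriptApp.deleteTrigger(trigger);
  });
  // Recreate a form-submit trigger bound to the active spreadsheet
  ScriptApp.newTrigger('onFormSubmit')
    .forSpreadsheet(SpreadsheetApp.getActive())
    .onFormSubmit()
    .create();
}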


Query Timeout in MySQL Workbench | Error Code: 2013. Lost connection to MySQL server during query

This is kind of a silly and duplicative post, but I spent too much time searching for the right answer, so maybe this will help the right course of action bubble to the top faster in the future.

The Issue

I was trying to run a query on my local SQL install (whatever MAMP manages and provisions) using MySQL Workbench 6.3 for Mac but kept getting a timeout error.

The query itself wasn’t overly complex, but I was using aggregate functions, GROUP BY, and a join to consolidate a dataset. I’m working with distance education reporting data for all U.S. colleges and universities from 2012–2015, so this join involved a 7K-row table and another with 25K rows: not inconsequential, but also not BIG-data level.

SELECT
    STABBR AS State,
    EFDELEV AS Level,
    SUM(EFDETOT) AS Total_Distance,
    SUM(EFDEEXC) AS Exclusive_Distance,
    SUM(EFDESOM) AS Some_Distance,
    SUM(EFDENON) AS None_Distance
FROM hd2012
LEFT JOIN ef2012a_dist_rv
    ON hd2012.UNITID = ef2012a_dist_rv.UNITID
GROUP BY State, Level;

I did some initial googling on the error code, but it is a pretty general one, so it was difficult to be sure whether this was a limitation of MySQL itself or of the Workbench client. I read a few posts that suggested manipulating some of the .conf files for the underlying MySQL install, and I went too far down this road before trying something in Workbench itself.

It turns out Workbench has timeout settings you can extend to make sure it waits a sufficient amount of time for your query to return data. Thanks to a specific answer on StackOverflow for the pointer, though the “how-to” description it links to is no longer valid, hence this blog post.

The Fix

There is a quick setting in Preferences that helped me. As you might expect, Workbench has settings to manage its connection to the MySQL server. In my case, those were just too short for my long-running queries.

I changed the 30-second defaults to 180 seconds and got back the data I needed. However, I’d imagine some workloads would call for a much higher timeout, especially if you wanted to run a lot of transactions.

[Screenshot: connection timeout settings in MySQL Workbench Preferences]
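If you want to rule out server-side limits while you’re at it, MySQL exposes its own timeout variables, which you can inspect with a quick query (most values are in seconds):

SHOW VARIABLES LIKE '%timeout%';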


Extending WP REST API Index Route Response

This should be a fairly quick blog post, but it should help some folks out if they are looking to extend the WP REST API index route to include some additional fields. First, let’s clear up what I mean by index route.

For the purpose of this post, we are considering the index route to be whatever is returned when you request the /wp-json/ endpoint, not the fully qualified namespace /wp-json/wp/v2, which returns route information about the site. The /wp-json endpoint returns some of that information as well, but it also includes some details about the site itself. 

We’re prototyping some additional ways to aggregate student portfolio pages/posts through the API, and we need to eventually develop a plugin to add some additional site-level settings that will help to structure the various portfolio views. 

Either way, we needed to add some additional fields to the index response, but the WP docs aren’t that specific for this endpoint. Most of the existing resources on modifying responses out there deal with adding fields to existing objects like posts, pages, or comments. 

However, there isn’t an Index object that we can pass through, which led us to the available filter options using the ‘rest_index’ filter. From there, we get access to the WP_REST_Response object and can modify it. As it turns out, most of the important stuff happens on the WP_HTTP_Response class it extends, so that is a better place to look if you want to modify the response object in a meaningful way.

At the end of the day, the data on the response object is just an associative array, and you can modify it as you would any other array. Here is the code that should go in functions.php:

<?php

function filterResponse($response){
   $data = $response->data;
   $data['extra_field'] = 'some data';
   $response->set_data($data);
   return $response;
}

add_filter('rest_index', 'filterResponse');

?>
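After adding this filter, a request to /wp-json should include the new field alongside the standard index data; something like this trimmed-down response, assuming a stock WordPress install:

{
    "name": "My Site",
    "description": "Just another WordPress site",
    "url": "https://example.com",
    "namespaces": ["wp/v2"],
    "extra_field": "some data"
}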


Using AmCharts with Vue and Webpack

I finally swallowed the Webpack pill, mostly because I wanted to get the most out of single file Vue components for some new projects I’m working on, and Webpack is along for the ride.

Overall, it’s been a semi-frustrating but also instructive experience. Before this, I had never used ESLint or any other type of linting, and I’m still pretty on the fence about its usefulness outside of really large and complex projects, but it’s taught me a few things about what people think ‘modern’ JavaScript should look like.

Webpack itself seems like Gulp on steroids, and a bit more to wrap your head around. However, the biggest change it necessitates for me is the adoption of ES6 module patterns. I’m familiar with this type of pattern from my work on Node projects, but its usage with single-file Vue components was new to me.

More importantly, a lot of the patterns I’ve previously used to build things on the front end are now pretty much bunk. I’ll talk about a few ways to work around the module conventions if you are using a library that doesn’t support them.

The Issue at Hand

In a lot of my previous projects, I handled dependency management in a more straightforward fashion. If the project was small, a few script tags in the body sufficed. If it was larger, I would use Gulp to minify and concatenate all of the files together into one bundle.

Webpack, on the other hand, introduces the idea of the dependency graph and organizes your code so that each module is self-contained and only has access to the code it needs to do its job. Overall, this promotes good design patterns and reduces the number of bugs that can be introduced into your code.

While that’s all well and good, those benefits come at the cost of added complexity, especially if the code you use to build things doesn’t adhere to the module pattern. In this case, AmCharts is a library I use quite frequently for charting, and it wasn’t immediately compatible with Vue and Webpack.

The fix was fairly easy, but I figured I’d write a blog post to help further my own understanding as well as add to the searchable material that others might find useful.

Understanding the Module Pattern

To better understand how to make arbitrary libraries work with Webpack, let’s look at how it works and what it expects in order to do what it does. Before advanced JS bundling, most library developers followed the practice outlined below:

// Attach the library's interface to the global window object
window.sweetLibrary = {
      method: function () {
       console.log('sweet method')
      }, 
      property: 'sweet prop'
}

// Any other script on the page can now call it
sweetLibrary.method()

You would define a global variable or attach an object to the global window object, both of which make the library’s interface available to any JavaScript program executing in the same thread.

This is pretty straightforward, but not without its downsides. If we have a bunch of different libraries mucking around in the global scope, the potential for namespace collisions goes up. Lots of smart developers came up with ways to make this more sensible for larger-scale projects, mostly involving closures, but I won’t get into all that. There is a great article from the Free Code Camp community that sums up JS modules better than I could.

So, how is the module pattern different?

Instead of making our JS libraries available to the global scope, we can explicitly export objects and import them only when needed.

For example, we might define a locally scoped object, and then expose that through a module interface using module.exports:

//sweet-library.js

let sweetLibrary = {
    method: function () {
      console.log('sweet library')
    }, 
    prop: 'sweet prop'
}

module.exports = sweetLibrary

Here we are exporting an object, but we can really export anything we want. Once we’ve exported something, we can import or require that module in another piece of code to gain access to its functionality.

//sweet-component.js 
let sweetLibrary = require('./sweet-library')
sweetLibrary.method()
//sweet method

//Or, we could use import 
import sweetLibrary from './sweet-library'
sweetLibrary.method()
//sweet method

Mostly, these two methods do the same thing, giving you access to another library or object through this export/import pattern. More importantly, this is the way Webpack expects you to structure JS code if you want it to be managed by the Webpack build process.

Getting AmCharts to Work (using non-module libraries with Webpack)

Back to my initial issue: I have this library I use a lot for charting and want to use it within my Webpack project, so what do you do once you find out AmCharts does not support the module pattern?

At first, I reverted to including a script tag in the HTML file, hoping that my Vue code could grab the reference from the global scope. I’m sure that might have worked, but my new ESLint setup gently reminded me that wasn’t a best practice. 😉

It’s worth noting that AmCharts published a quick tutorial on some of this, but it seemed to incur some additional requests to load images, and it also didn’t pass ESLint because the imported AmSerial was never used. ESLint apparently is very picky about things.

So, I found a GitHub issue that pretty clearly stated AmCharts wasn’t going to support modules any time soon. I tried a few things listed in the comments, but no luck. However, there was one person who mentioned getting it to work with Vue and Webpack by going through the window object.

AhHa! And sure enough, that was the key to getting the best of both worlds. I ended up doing something like the code below:

import 'amcharts3/amcharts/amcharts'

// AmCharts attaches itself to window, so we reference it there
var chart = window.AmCharts.makeChart('chartdiv', {
      config: 'config'
    })

The import statement indicates to Webpack that AmCharts is a dependency that needs to be included in the bundle, but since the AmCharts library doesn’t export anything using module.exports, there is no way to interface with it that way.

After digging into the AmCharts source code, I could easily see in the first few lines that it was modifying the global window object. So, by tapping into the window object myself, I was able to get everything working and pass the ESLint checks.

Maybe there is a better way to go about this type of thing, and I’m sure I’ll learn more as I use Webpack and Vue CLI more, but hopefully this will help some poor soul as they wade through the module murkiness.


Outsmarting Google: Generating Download Links with Google App Script

For the most part, I love working with Google App Script. The APIs are what you expect them to be. Most of the features are well-documented. Heck, I’ve even tried to build Google Sheets into a small relational database.

But after you’ve been around the block for a while, you realize there is this odd black market of sorts built into Google App Script and the associated Drive services: things you can do that Google never really meant for you to do, or built in as a feature at some point and forgot about.

This post exposes one of those dirty back alleys, which you’ll need to navigate to generate a download link for Google documents.

The Scenario

In reality, this should have been a straightforward process. We were trying to loop through a directory structure and print out some data about each file into a Google Sheet. The Google Sheet would then just serve some JSON that a little front end app could consume to allow people to download, copy, or view Google Drive files in a custom way.

All of the looping worked as expected, but for some reason a previous version of the download link was no longer working.

function execute(){
  //This is the top level folder
  var folderId = "FOLDER_ID_HERE"; 
  var folder = DriveApp.getFolderById(folderId);
  
  var sheet = SpreadsheetApp.getActiveSheet();
  //Append headers to the sheet
  sheet.appendRow(["File Name", "Parent Folder Name", "URL", "Download Link", "Copy Link"]);
  //Call function to recurse through subfolders
  loopSubFolders(folder, sheet); 
}

function loopSubFolders(parentFolder, sheet){
  //List this folder's files, then recurse into each subfolder
  listFilesInFolder(parentFolder, sheet);
  var subFolders = parentFolder.getFolders(); 
  while(subFolders.hasNext()){
    loopSubFolders(subFolders.next(), sheet); 
  }
}

function listFilesInFolder(folder, sheet) {
  //Writes a row of data to the spreadsheet for each file in the folder
  var contents = folder.getFiles();
  while (contents.hasNext()) {
    var file = contents.next();
    var data = [
      file.getName(),
      folder.getName(),
      file.getUrl(),
      //Build download and copy links from the file's edit URL
      file.getUrl().split('/edit')[0] + '/export?format=docx', 
      file.getUrl().split('/edit')[0] + '/copy'
    ];
    sheet.appendRow(data);
  }
}

Things started to get tricky when logging out the download URL. Google Apps Script makes it easy to get the file URL using getUrl, but that is just a link to view the document. After some research, it seemed like getDownloadUrl might do the trick, but alas, that didn’t work for any of the Google docs or the random files in the folder.

At some point in this process, I also decided it was a good idea to print out the files as binary blobs in the spreadsheet, which summarily crashed the sheet.

We were able to find some older tutorials that broke down various link structures, but none of those seemed to work with the current Google Drive setup. At the end of the day, we kind of just started trying things based on the structure of the /copy link until we found something that stuck.

https://docs.google.com/document/d/YOUR_DOCS_ID_HERE/export?format=pdf

However, this only seems to work for native Google Drive content types, and it needs an export format parameter; otherwise, the documents download as HTML. Either way, it’s bound to be useful to someone.
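For reference, the same pattern seems to hold for other native content types and export formats; these are illustrative variations on the link above, not an exhaustive list:

https://docs.google.com/document/d/YOUR_DOCS_ID_HERE/export?format=docx
https://docs.google.com/spreadsheets/d/YOUR_SHEET_ID_HERE/export?format=xlsx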


Analyzing and Visualizing Networks

One of the current projects I’m working on involves building out some analytical tools that sit on top of an application that lets students track attendance at extracurricular events for a living and learning program run by the da Vinci Center. For most of the visualizations, I used amCharts to build some nice-looking and functional charts, but since this data set is pretty unique, I also wanted to explore some of the unique information available that other analytical tools might ignore.

After putting together a pretty gnarly SQL query to expose all of the student attendance data through the WordPress REST API, I settled on creating a network graph that shows the co-attendance between students at all of the events.

See the Pen Student Network Analysis by Jeff Everhart (@JEverhart383) on CodePen.

Overall, I was really happy with how this turned out for a few reasons. One of the key tenets of the da Vinci model and its living and learning programs is that the cross-pollination of ideas across disciplines is what leads to innovation.

While the program is open to anyone, at least to my knowledge, they focus on getting students from Business, Engineering, and the Arts to work together. Thus, by encoding those groups with color in the network graph, we should be able to literally visualize the ways in which students interact across those boundaries.

More importantly, it can also help very quickly identify outliers that might not immediately be apparent in other forms, which we can see by the pair of students off in the corner. In a program where collaboration is encouraged, it might be worthwhile to check in on these folks to see what’s up.

Notable Algorithms and Such

Part of the reason I wanted to do this visualization in the first place was that I’d never written code to piece together the nodes and edges of a network before. As with most things, I decided not to consult the oracles on StackOverflow immediately, and I am happy to say that I came up with a working implementation without copying anything from anyone else.

Since this was an undirected graph, meaning there is no directionality associated with the links or edges between nodes, I needed to capture each unique occurrence of a pair of students attending the same event.

Here is what each attendance record looked like in simplified form:

{eventID:511, userEmail:"jeff@awesome.com", ...}

And here is what the finished data structure looked like before feeding it into D3:

let network = {
    "nodes": [{
        "id": "jeff@awesome.com", 
        "group": "Humanities & Sciences"
    }], 
    "links": [{
        "source":"jeff@awesome.com", 
        "target":"everhart@me.com", 
        "value":1
    }]
}

We have a network object with properties for the nodes and links, each containing an array of objects. Each link object contains a source, a target, and a value. Since the graph is undirected, the source and target are somewhat arbitrary, and the value specifies the number of times those two people attended the same event. The value count was integral in helping to weight the links on the force-directed graph so that students with more co-attendances are pulled more tightly together.

At present, we have only about 50 or so records, but that number will easily quadruple by the end of the semester, so I was interested in making the code used to construct this network diagram as efficient as possible.

In the end, there is one section that amounts to O(n^2) runtime, but I was able to avoid a lot of loops within loops by making use of hash tables, or just plain old objects in JavaScript, as my in-between data structures.
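To make the hash-table idea concrete, here is a minimal sketch of how the network could be assembled from attendance records; the record shape and group lookup are simplified assumptions, not the production code linked below:

function buildNetwork(records) {
    // Group attendee emails by event so we can pair them up later
    const eventAttendees = {}
    // Track unique nodes keyed by email to avoid duplicates
    const nodes = {}
    records.forEach(record => {
        nodes[record.userEmail] = { id: record.userEmail, group: record.group }
        if (!eventAttendees[record.eventID]) eventAttendees[record.eventID] = []
        eventAttendees[record.eventID].push(record.userEmail)
    })

    // Count co-attendances; sorting the pair makes (a,b) and (b,a) share one key
    const links = {}
    Object.keys(eventAttendees).forEach(eventID => {
        const attendees = eventAttendees[eventID]
        for (let i = 0; i < attendees.length; i++) {
            for (let j = i + 1; j < attendees.length; j++) {
                const key = [attendees[i], attendees[j]].sort().join('|')
                links[key] = (links[key] || 0) + 1
            }
        }
    })

    return {
        nodes: Object.values(nodes),
        links: Object.keys(links).map(key => {
            const [source, target] = key.split('|')
            return { source: source, target: target, value: links[key] }
        })
    }
}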

If you’re interested in looking at that code, you can take a look at the source on GitHub.
