Category Archives: thirdspace

Making a Chatbot with Amazon LEX

What follows here is an exploration of an evolving project I’m working on to provide some additional touch points for current and prospective students in online courses at VCU.

Chatbots, AI, Machine Learning, and other terms with similar connotations seem to be all the rage nowadays, but using publicly available cloud services, we can get pretty close to creating some powerful new tools.

What is a Chatbot?

First, let’s get this definition out of the way. Amazon bills its Lex service as “a service for building conversational interfaces into any application using voice and text.”

And this is a pretty good way of thinking about what a chatbot really is: an interface. At the end of the day, most of us don’t talk or write just for our own enjoyment; we do so to produce results, get information, or make something happen.

With a chatbot, or conversational interface, we can allow people to arrive at those ends using natural language instead of an interface that I might construct out of buttons and form fields.

However, the metaphor of the interface is pretty apt here, as we are still essentially inputting data by talking or typing and getting data back from some backend service.

While the chatbot becomes our interface, we can interact with that chatbot over a number of different channels using the (almost) turnkey integrations built into Amazon Lex. For example, I was able to make our chatbot available through Twilio SMS and Slack in a few hours.

We can also easily have our chatbot interact with whatever backend services we need to give people the answers they want. On the other end of the spectrum, there are also mechanisms for a human to oversee the bot’s responses, enabling the humans the bot is meant to replace to further train it.

Getting Started with a Chatbot using Amazon Lex

This post isn’t going to give a step-by-step guide, as Amazon has produced one of those that is pretty sufficient to get you going.

Rather, I’d like to start at a high level with what types of workflows and vocabulary you need to understand to build your own chatbot to solve whatever issues you are facing.

First, let’s start with the idea of intent.

What do we want to do?

For Amazon Lex, an intent is a specific building block, but at a more philosophical level, most of our language has a specific intent associated with it, either implicitly carried in the utterance (linguist speak for some act of language production, e.g. writing, speaking) itself or explicitly stated.

To start building a chatbot, you need to settle on some intents and then decide what types of utterances people would use to express such an intent.

For example, maybe we have an intent where someone wants information about online courses:

Once we decide on the broad strokes of an intent, we then need to add some sample utterances that might express that intent in various ways. This is where the AI takes over, creating models based on your samples so that similar phrases trigger the same intent.
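
To make that a little more concrete, here is a rough sketch of what an intent looks like as data if you define it through the Lex model-building API instead of the console. The intent name and utterances are invented for this example, and the field names are from memory of the Lex V1 API, so treat it as illustrative rather than exact:

// Illustrative only: roughly the shape of an intent in the Lex V1 model-building API.
// The console builds an equivalent structure for you through its forms.
const onlineCourseInfoIntent = {
    name: 'OnlineCourseInfo',            // hypothetical intent name
    sampleUtterances: [                  // the phrases Lex builds its model from
        'What online courses do you offer',
        'I want information about online courses',
        'Tell me about online {CourseType} courses'
    ]
    // slots and fulfillment settings are covered below
}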

It’s worth noting here that there is an ongoing process to refine these prototype utterances. People always interact with systems in unexpected ways, and conversation is perhaps more fraught with those ambiguities. AWS allows you to look at the actual utterances people have used with your bot and add them to intents if they made your bot go WTF!? the first time around.

What do we need to know to help fulfill your intent?

From here we can start by talking about Types, or as Amazon Lex calls them, Slot Types. Slot types are the nouns or adjectives that we need to begin to fulfill the user’s intent.

For example, there are hundreds of online courses offered at VCU each semester, so we need to gather some additional information using prompts meant to elicit specific details.

In my example, we need to know something about the course type, e.g. title, subject, or discipline, and what level the student is studying at, e.g. undergraduate or graduate.

We can mark certain slots as required and provide some prompts for the chatbot to use as it negotiates with the user to get this additional information. We also have the option of letting Lex intuit the nouns or adjectives necessary for our slots, or we can specify the values the bot will accept, e.g. only allow large, medium, and small as possible values for a PizzaSize slot.
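
As a sketch of what that restriction looks like as data, here is roughly how a custom slot type and a required slot could be described. The names are invented and the field names are approximate, based on my recollection of the Lex V1 model-building API:

// Illustrative only: a custom slot type that limits accepted values (the PizzaSize example).
const pizzaSizeSlotType = {
    name: 'PizzaSize',
    enumerationValues: [
        { value: 'small' },
        { value: 'medium' },
        { value: 'large' }
    ],
    // TOP_RESOLUTION restricts the slot to the listed values;
    // ORIGINAL_VALUE would pass through whatever the user said.
    valueSelectionStrategy: 'TOP_RESOLUTION'
}

// A required slot on the intent, with the prompt Lex uses to elicit it.
const pizzaSizeSlot = {
    name: 'PizzaSize',
    slotType: 'PizzaSize',
    slotConstraint: 'Required',
    valueElicitationPrompt: {
        maxAttempts: 2,
        messages: [{ contentType: 'PlainText', content: 'What size pizza would you like?' }]
    }
}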

Once all of the required slots are filled in the chatbot session, we can write some additional backend logic to fulfill the user’s intent.

Give the People What They Want

For each intent you create for your chatbot, you can choose what it means to fulfill a request and how exactly that might fit into the flow of your other systems.

For example, if each request will get the same response, we can write canned messages that get shown when certain intents are fulfilled. But we can also use the chatbot as an interface to a larger system that might get additional information, create an appointment, or order a pizza.

For that reason, the conversational interface that chatbots represent will continue to grow in usefulness. For AWS Lex, this typically means using a Lambda function to connect to these other systems.

At the risk of over-hyping things by talking about chatbots and serverless code in the same blog post, this was something I got running in an hour or so:

const AWS = require('aws-sdk')

const S3 = new AWS.S3({
    maxRetries: 0,
    region: 'us-east-1',
})

// Pull the full course catalog out of a JSON file stored in S3.
const getCourses = () => {
    return new Promise((resolve, reject) => {
        S3.getObject({
            Bucket: 'your-bucket-here',
            Key: 'your-file.json'
        }, function (err, data) {
            if (err !== null) {
                return reject(err)
            }
            resolve(JSON.parse(data.Body.toString('utf-8')))
        })
    })
}

const processRequest = (request, callback) => {

    // Slot values Lex gathered during the conversation
    const CourseType = request.currentIntent.slots.CourseType
    const StudentLevel = request.currentIntent.slots.StudentLevel // captured, but not used for filtering yet

    getCourses()
        .then(data => {
            let message

            // Keep only the courses whose subject description mentions the requested type
            let courses = data.data.filter(course => {
                return course.subject_desc.toLowerCase().includes(CourseType.toLowerCase())
            })

            if (courses.length === 0) {
                message = "Sorry, we couldn't find any courses in that discipline or at that level. Let me know if you want me to look again for something else."
            } else {
                message = `We found ${courses.length} courses: \n
                    ${courses.map(course => `${course.subject} ${course.course_number}-${course.section}: ${course.title}`).join('\n')}
                `
            }

            // The dialogAction format Lex expects back from a fulfillment Lambda
            let response = {
                "dialogAction": {
                    "type": "Close",
                    "fulfillmentState": "Fulfilled",
                    "message": {
                        "contentType": "PlainText",
                        "content": message
                    }
                }
            }

            callback(null, response)
        })
        .catch(err => callback(err))
}

// Lambda entry point (assumed wiring): hand the Lex event off to processRequest
exports.handler = (event, context, callback) => processRequest(event, callback)

This is some example backend logic currently running on our chatbot. When a particular intent is ready to be fulfilled, the chatbot passes a message to this Lambda function written in JavaScript. The script reads the course information stored in the intent’s slots, pulls in a huge array of course data from a JSON file stored in S3, then looks for courses that match the user’s input.
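
To make that handoff a little more concrete, here is roughly what the event Lex sends to the fulfillment Lambda looks like for this intent. The values are invented and the field names come from memory of the Lex V1 event format, so treat it as a sketch rather than a spec:

// Roughly the event Lex hands to the fulfillment Lambda (Lex V1 format, fields from memory).
// The handler above only cares about currentIntent.slots.
const exampleLexEvent = {
    messageVersion: '1.0',
    invocationSource: 'FulfillmentCodeHook',
    userId: 'some-user-id',
    sessionAttributes: {},
    bot: { name: 'CourseBot', version: '$LATEST' },       // hypothetical bot name
    currentIntent: {
        name: 'OnlineCourseInfo',                          // hypothetical intent name
        slots: { CourseType: 'biology', StudentLevel: 'undergraduate' },
        confirmationStatus: 'None'
    },
    inputTranscript: 'are there any online biology courses for undergraduates'
}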

The Lambda function then creates a JSON response with a message to the user that it passes back off to the chatbot. While this is a pretty simple MVP for this concept, hopefully you can see that there really isn’t a limit to the sophistication of the types of tasks we can complete using the conversational interface.

At the same time, while the Natural Language Processing used by the chatbot is impressive to someone who’s spent years studying syntax and semantics, bots are not some magical box that will make things happen on their own.

All of the examples created by Amazon exhibit complex application logic that tells the chatbot how to respond based on user input. The chatbot does a good job of analyzing human utterances and saying “Hey, it sounds like they want to do X.” However, even getting this right requires a lot of human intervention throughout the process.

Making a Chatbot with Amazon LEX

Hopefully, this post will be a helpful introduction for some folks interested in the latest chatbot craze, but at the same time I also hope that this will underscore the limits of the ‘AI’ products being touted at present. Creating even a moderately functional chatbot requires much more human involvement than anyone proclaiming wizardry will want to admit, so don’t ignore the man behind the curtain.

Weekly Web Harvest for 2017-12-03

  • Kleptocrat

    Kleptocrat is a unique game of strategy and tactics, based on real-life patterns of money laundering and offshore structuring that have been used by actual corrupt public officials… that is, until they got caught.

    This game, created by The Mintz Group, a global investigative firm that specializes in tracing assets, offers you insight into the strategies of the corrupt, as well as those who are trying to bring them to justice.

  • Millions Are Hounded for Debt They Don’t Owe. One Victim Fought Back, With a Vengeance – Bloomberg

    He started a spreadsheet, Scums.xlsx, to keep track.

  • xkcd: Bad Code

    “it just looks bad because it’s a spreadsheet formula”

  • Hitting Reset, Knewton Tries New Strategy: Competing With Textbook Publishers | EdSurge News

    The secret to its swift entry into publishing was OER (open education resources). Rather than hire authors to write textbooks from scratch, the company is now curating open-educational materials already on the internet.

  • Firefighters attempt to contain Bel-Air blaze ahead of the strong winds expected Thursday night – LA Times

    The Los Angeles Police Department asked drivers to avoid navigation apps, which are steering users onto more open routes — in this case, streets in the neighborhoods that are on fire. 

  • Food taboos: their origins and purposes

    the Ache people, i.e., hunters and gatherers of the Paraguayan jungle. According to Hill and Hurtado [6], the tropical forests of the Ache habitat abound with several hundreds of edible mammalian, avian, reptilian, amphibian and piscine species, yet the Ache exploit only 50 of them. Turning to the plants, fruits, and insects the situation is no different, because only 40 of them are exploited. Ninety eight percent of the calories in the diet of the Ache are supplied by only seventeen different food sources.

  • Ants, not evil spirits, create poisonous devil’s gardens in the Amazon rainforest

    “Devil’s gardens are large stands of trees in the Amazonian rainforest that consist almost entirely of a single species, Duroia hirsuta, and, according to local legend, are cultivated by an evil forest spirit,” write Frederickson and her colleagues in Nature. “Here we show that the ant, Myrmelachista schumanni, which nests in D. hirsuta stems, creates devil’s gardens by poisoning all plants except its hosts with formic acid. By killing other plants, M. schumanni provides its colonies with abundant nest sites—a long-lasting benefit, as colonies can live for 800 years.”

  • Bitcoin could cost us our clean-energy future | Grist

    By July 2019, the bitcoin network will require more electricity than the entire United States currently uses. By February 2020, it will use as much electricity as the entire world does today.

  • Everybody Lies: FBI Edition | Popehat

    When an FBI agent is interviewing you, assume that that agent is exquisitely prepared. They probably already have proof about the answer of half the questions they’re going to ask you. They have the receipts. They’ve listened to the tapes. They’ve read the emails. Recently. You, on the other hand, haven’t thought about Oh Yeah That Thing for months or years, and you routinely forget birthdays and names and whether you had a doctor’s appointment today and so forth. So, if you go in with “I’ll just tell the truth,” you’re going to start answering questions based on your cold-memory unrefreshed holistic general concept of the subject, like an impressionistic painting by a dim third-grader. Will you say “I really don’t remember” or “I would have to look at the emails” or “I’m not sure”? That would be smart. But we’ve established you’re not smart, because you’ve set out to tell the truth to the FBI.

Weekly Web Harvest for 2017-11-26

  • This Magical Software Makes Facebook Profile Pictures Come Alive

    “What Facebook will do with this–I don’t know.”
    It’s reasonable to imagine that Facebook would like to incorporate such a fun feature into its platform as soon as possible.

    h/t Matt

  • Syndicating annotations – Jon Udell

    Although it sprang to life to support ebooks, I think this mechanism will prove more broadly useful. Unlike PDF fingerprints and DOIs, which typically identify whole works, it can be used to name chapters and sections. At a conference last year we spoke with OER (open educational resource) publishers, including Pressbooks, about ways to coalesce annotations across their platforms. I’m not sure this approach is the final solution, but it’s usable now, and I hope pioneers like Steel Wagstaff will try it out and help us think through the implications.

  • For Example
  • How Far Will Sean Hannity Go? – The New York Times

    Until a few years ago, the staff of “Hannity,” the top nightly cable show in the United States, shared news by text or email, but today, much of the collaborative work is handled via a Twitter account accessible to only the staff. “If I like something, I’ll click Like, and if other producers like something, they’ll click Like,” Berry told me. The result is a “pool of ideas” — “50, 60, 70 stories,” in addition to articles Hannity himself has flagged for inclusion. “You’ve got to pull it all together,” Berry added. “Build that argument.” Soon, a few top contenders had emerged, among them a Facebook comment from a CBS executive, Hayley Geftman-Gold, who wrote that she was “not even sympathetic” because “country-music fans often are Republican gun toters.”

  • What city is the microbrew capital of the US?

    Using scrollama (or something close to that).

    h/t Jeff

Resetting Triggers in Google Apps Script

Occasionally, when you are running a Google Script attached to a spreadsheet or document, the triggers that run those scripts start to malfunction. Over the last five years, I haven’t been able to identify meaningful patterns for why these triggers and their associated scripts fail; it just happens sometimes.

However, here are a few things to look out for when you have a running Google Script:

  1. If you make changes to the underlying sheet, form, or document, things sometimes break. That can mean changing a setting, adding a form field, or moving a row/column. In my experience, this has been the hardest thing to identify.
  2. If Google changes something or updates, your stuff can break. Each of these scripts relies on conventions associated with the GSuite ecosystem, so there can be ripple effects if one service changes how it does business.
  3. Script authorization matters. If you are sharing a sheet and script among several people, this can sometimes cause the script to do funky things. For example, we might all share access to a Google Sheet, but the script is authorized to take actions (send email, modify the sheet, etc.) as one person: the original Google account that authorized the script. Having other authenticated users interact with the script tends to make things go sideways (see the small diagnostic sketch below).
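
If you suspect that third issue, a quick way to check is to log the account the script actually runs as. This snippet isn’t from the original walkthrough, just a small diagnostic sketch using the Apps Script Session service:

function whoAmI() {
    // The effective user is the account whose authorization the script uses when it runs,
    // which is not always the person currently editing the sheet.
    Logger.log('This script runs as: ' + Session.getEffectiveUser().getEmail())
}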

The Good News

The good news is that these issues can usually be resolved by simply resetting the project’s associated triggers, or in more severe cases, by creating a duplicate script entirely. If something starts to error out in a Google Script you own, you should get a notification that looks like this:

In most cases, this will give you some additional information about why and where the script is failing.

Below are the steps to take when you need to reset a project’s trigger.

Reset a Trigger on a Google Script

The first thing we need to do is open the script editor from the Google Sheet or Doc that the script is attached to. To do that, open the Tools menu and click ‘Script Editor,’ which will open the script editor in a new window:

After you’ve opened the script editor, you’ll need to locate the project’s triggers and modify them. In the script editor, open the ‘Edit’ menu and locate the option labeled ‘Current project’s triggers.’

When you click ‘Current project’s triggers,’ it will open a pop-up. If you don’t have any triggers set, you can set one now. However, if you have an old trigger that is failing, you’ll want to click the ‘X’ button next to the trigger to remove it from the current project. Once you’ve done that, make sure to click ‘Save’ and close the pop-up menu.

Now that we’ve cleared out our failing trigger, you can add a new one. Reopen the current project’s triggers menu, then click the link labeled ‘No triggers set up. Click here to add one now.’ 

That will give you a set of menus for configuring the new trigger. GSuite apps support a lot of installable triggers, but in my experience most are either time-driven or used to trigger something when someone submits a form. If that’s the case, where you want to listen for a form submission and then take some action, the form-submit configuration is pretty much the default for that scenario.

Just remember to click ‘Save’ when you add the trigger before closing the menu, and be sure that the function you want to run is selected in the first drop-down menu.
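
If you end up doing this reset often, you can also script it. This isn’t part of the walkthrough above, but the ScriptApp service exposes the same delete-and-recreate operations; here is a minimal sketch, assuming the function you want the trigger to run is named onFormSubmit and the script is bound to a spreadsheet:

function resetFormSubmitTrigger() {
    // Remove every existing trigger attached to this project.
    ScriptApp.getProjectTriggers().forEach(function (trigger) {
        ScriptApp.deleteTrigger(trigger)
    })

    // Recreate a single installable trigger that runs onFormSubmit
    // whenever a form response lands in the bound spreadsheet.
    ScriptApp.newTrigger('onFormSubmit')
        .forSpreadsheet(SpreadsheetApp.getActive())
        .onFormSubmit()
        .create()
}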

 

 

The Toy Shovel


The Eyes of A Child flickr photo by -Jeffrey- shared under a Creative Commons (BY-ND) license

Once upon a time there was a young human who loved the beach. She had a toy shovel that she used at the beach all the time. She used that shovel to dig holes and make sand castles. Many fond days at the beach were spent with that shovel.

This young human also had a dog. The dog did what dogs do. Her responsibility was to clean up the dog doo when the dog was done. She disliked this task intensely and would often complain about it.

“Eureka!”1 exclaimed her parental unit one day. “Our daughter loves her beach shovel! Let’s have her use that shovel to clean up the dog mess instead of using the big metal shovel.”

As you might guess, the daughter did not enjoy the shift in tools.

The beach shovel did not make cleaning up dog poo more pleasant. It actually made things worse. It was a poor fit for the unpleasant task compared to the traditional shovel.2 The beach shovel was also now contaminated in a way that made her not want to use it at all anymore, even when she was at the beach.


This is my over-simplified parable on using social media platforms in education.


1 Probably minored in Greek.

2 A long sturdy handle is what you want in this scenario.

Weekly Web Harvest for 2017-11-19
