Author Archives: Tom Woodward

Google Form to Video Display

This is another SPLOT-ish tool that takes videos uploaded through a Google Form and then displays them.1 This particular proof of concept was built in about ten minutes as a result of a conversation with our World Language faculty, who are going to be doing student-to-student video work with people from other countries. We’re likely to use Zoom as the platform. It makes recording the video easy, and the compression of the video is pretty impressive. Initially I was concerned about file size, but between that compression and the fact that Google allows up to 10GB file uploads via their vanilla forms, it isn’t a problem. The caveat there is that the “file upload option is only available for G Suite customers with a Google Form shared within their organization.” Good enough for our purposes but a bit limited if you wanted to use it more broadly.

Form Stuff

I opted to auto-log the VCU email address, since the file upload required that anyway, and only used two other elements: a file upload piece and a single categorical element. You could get much more complex if you had the desire. Just make sure you’ve created a results spreadsheet and published it to the web.

The Pieces

My form is here.
The spreadsheet is here.
The JSON is here.

These pieces come together in the CodePen example here, letting us display the videos and hide/show them by categories.

I’m going to break this down fairly thoroughly, since it might have value for people new to this and it’ll help me expose elements I don’t understand.

Get the Data

Once I’ve published the spreadsheet, I always verify that my URL to the JSON works. The only piece you’d need to change from below is the spreadsheetID. You can just cut/paste the URL from the console into the browser address bar to verify it works. You should see some JSON when you go there. I also recommend using the Chrome extension JSON Formatter to clean up what you see. It’ll help you see the structure of the data.

var spreadsheetID = "1Bx5a84VLnwEa0i7Cj-8x0qhwMLmoPRmGr_4u1hRrU-I"; // Make sure it is published to the web
var dataURL = "https://spreadsheets.google.com/feeds/list/" +  spreadsheetID +  "/1/public/values?alt=json";
console.log(dataURL);

Once you’ve verified there is data to fetch, it’s time to get it. This little piece just makes sure there is data to get and that the data is JSON.

//get the data
fetch(dataURL).then(function(response) {
  var contentType = response.headers.get("content-type"); //what kind of data is it?
  if (contentType && contentType.indexOf("application/json") !== -1) {
    //if it's JSON, return it and then . . . 
    return response.json().then(function(json) {
      console.log(json); //spitting it out into the console lets you see what it looks like in whole before you break it down -- same as at the URL we used above 

For Each Chunk of Data

Now that we’ve got a chunk of data, we need to find the elements we want and break down the pieces for display.

JSON display from spreadsheet
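A trimmed sketch of that structure (the values are illustrative; the gsx$ names come from this form’s questions):

{
  "feed": {
    "entry": [
      {
        "gsx$emailaddress": { "$t": "student@vcu.edu" },
        "gsx$uploadyourvideofile": { "$t": "https://drive.google.com/open?id=FILE_ID" },
        "gsx$whatthemeareyouaddressing": { "$t": "Food" }
      },
      { "gsx$emailaddress": { "$t": "..." } }
    ]
  }
}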

If you take a look at the JSON above, you’ll see that Google makes each row an entry. I collapsed a bunch of other information so you could see one full entry and the start of the second entry. They’re both children of the ‘feed’ element and they have their own sub-elements. To make this work we need to cycle through each entry.

You may notice that the entry element is followed by a square bracket which indicates the data inside it is an array – “entry”: [

That means we can count how many entries there are using array.length. In my case the array is named data, and I loop through it like so: for (i = 0; i < data.length; i++) {stuff that is done}2 For each entry we’re going to want sub-elements like the video URL, the category, and the email. Each element has a name, like gsx$uploadyourvideofile, and the actual value is held in a sub-element of that element called $t. So you get the actual information you want with the full path to it, like data[i].gsx$uploadyourvideofile.$t – you can see a couple of different elements getting assigned to variables with that method in the portion below.

When you build a spreadsheet like this, the questions from the form become the column titles and end up in the JSON as long strings without punctuation or spaces. So “What theme are you addressing?” in the form becomes the header for column C and then shows up in the JSON as gsx$whatthemeareyouaddressing, which is ugly but works fine and will be left as-is to keep things simple.

if (json.feed.entry) { //making sure we have at least one entry
  var data = json.feed.entry; //skipping down to the entry elements, which gives us an array of entries that we'll navigate
  for (i = 0; i < data.length; i++) {
    var file = data[i].gsx$uploadyourvideofile.$t; //get the video file URL
    var email = data[i].gsx$emailaddress.$t; //get the email
    var category = data[i].gsx$whatthemeareyouaddressing.$t; //get the category

From Variables to HTML

Now that we’ve got the raw data assigned to variables, we have to clean it up a bit and then write it back out as HTML.

For instance, Google’s video embed function uses a different URL structure than the one given in the spreadsheet. We really only want the ID from that URL. I assign the value of that URL to the file variable with var file = data[i].gsx$uploadyourvideofile.$t; and then chop off the first 33 characters that I don’t need with var fileId = file.substring(33);
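The substring trick assumes the Drive URL always starts with the same 33-character prefix (https://drive.google.com/open?id= is exactly 33 characters). If you’d rather not count characters, a slightly more defensive sketch:

//grab whatever follows "id=" rather than counting characters;
//falls back to the raw string if the pattern isn't there
var match = file.match(/id=([^&]+)/);
var fileId = match ? match[1] : file;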

To create the iframe structure, I just append strings to the video variable while working in my clean file ID. I’m also sticking the category into a div and doing a few other things. It’s very much like a mail merge, and it writes out all the HTML that’s ended up in the allFrames variable to the div with the id of videos.

for (i = 0; i < data.length; i++) {
  var file = data[i].gsx$uploadyourvideofile.$t; //get the video file URL
  var fileId = file.substring(33); //get just the ID from that URL
  var email = data[i].gsx$emailaddress.$t; //gets the email
  var category = data[i].gsx$whatthemeareyouaddressing.$t; //gets the category -- isn't that an ugly descriptor?
  category = category.replace(/\s/g, "-"); //get rid of any spaces in the category name
  category = category.toLowerCase(); //make the category lowercase because it's prettier and consistency is always a good thing
  var video =
    '<div class="' +
    category +
    ' video"><iframe src="https://drive.google.com/file/d/' +
    fileId +
    '/preview" width="640" height="480"></iframe><div class="info">' +
    category +
    " by " +
    email +
    "</div></div>";
  allFrames = allFrames + video; //build the iframe embed pattern that Google uses and add it to the allFrames string to write out below
}
document.getElementById("videos").innerHTML = allFrames; //write all the iframes to the videos div

That’s pretty much all there is to the display portion, although it might be helpful to see it all together as it is in the CodePen example.
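Before heading to the CodePen, here’s one way the pieces above assemble into a single working chunk (a sketch; it assumes an empty div with the id videos on the page and the published spreadsheet from earlier):

var spreadsheetID = "1Bx5a84VLnwEa0i7Cj-8x0qhwMLmoPRmGr_4u1hRrU-I";
var dataURL = "https://spreadsheets.google.com/feeds/list/" + spreadsheetID + "/1/public/values?alt=json";

fetch(dataURL).then(function(response) {
  var contentType = response.headers.get("content-type");
  if (contentType && contentType.indexOf("application/json") !== -1) {
    return response.json().then(function(json) {
      if (json.feed.entry) {
        var data = json.feed.entry;
        var allFrames = ""; //accumulates the HTML for every video
        for (var i = 0; i < data.length; i++) {
          var file = data[i].gsx$uploadyourvideofile.$t;
          var fileId = file.substring(33); //just the Drive file ID
          var email = data[i].gsx$emailaddress.$t;
          var category = data[i].gsx$whatthemeareyouaddressing.$t;
          category = category.replace(/\s/g, "-").toLowerCase();
          allFrames +=
            '<div class="' + category + ' video">' +
            '<iframe src="https://drive.google.com/file/d/' + fileId + '/preview" width="640" height="480"></iframe>' +
            '<div class="info">' + category + ' by ' + email + '</div></div>';
        }
        document.getElementById("videos").innerHTML = allFrames;
      }
    });
  }
});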

See the Pen world language demo display from form by Tom (@twwoodward) on CodePen.


1 This one actually works consistently, unlike my recent attempt at silent form submissions– which I’ll return to at some point or another.

2 It might have made more sense to name it entry rather than data.

Weekly Web Harvest for 2017-07-09

Browse by Color in WordPress

Here’s a quick video of a browse-by-color example I made in about ten minutes this AM prior to a meeting with a faculty member in our fashion program.

This one is using FacetWP1 which is acting on a custom field I creatively named ‘color.’ It seems like it’ll be useful to some disciplines, and we have the option to do lots of automated patterns using Color Thief to grab palettes.

The video is also one way we might start sharing examples of what we can do in various platforms. There’s a significant need to show that internally and to external faculty so people get a better idea of the scope of their options.


1 You could easily do this by hand now with the WP JSON data but it’d take a bit longer.

Community- Technically Speaking



Playing “Mah-Jong” at the Clubhouse of the Century Village Retirement Community. flickr photo by The U.S. National Archives shared with no copyright restriction (Flickr Commons)

Marie has a nice post summarizing the Georgetown Community presentation at Domains. And now Evelyn’s post reminded me to write a post on a site instead of just in my head.

The title of the presentation ‘Just a Community Organizer’ is a nod to the fact that community is hard to do. It can be hard technically but it’s often even more difficult on the human side.

As Evelyn brought up . . . community is not created by the technical ability to bring content together. There are lots of ways this can succeed technically but fail socially–> The stuff is there but no one cares.

At the same time, technology failures can prevent community from forming where you have all the other factors–> People want to see what’s going on but can’t find and interact with the stuff they want in reasonable ways.

There’s also the idea that people might not know what they want to see (or how they want to see it) until it’s given as an option or scaffolded as an action. Can we present content in novel and interesting ways that inspire curiosity and interaction? You can’t do that easily.

There’ve been many attempts to build online community between individual sites1 throughout Internet history– web-rings being an early way of associating groups of like-minded individuals to create community (and viewers), various portals, but most(?) successful communities have occurred on single sites- places with much more standardized content in an entirely standardized platform (think MySpace, Tumblr, Facebook, Twitter etc.). Even Geocities attempted to group things by topic. We’ve moved towards expectations of trending hashtags or ‘on this day 1 year ago’ reconnections with associated ways to like/heart/repost/comment. That impacts what people expect from community sites and from content sites. These educationally focused community sites are both.

That moves our focus to the content. Are the things your students are making actually things people want to see? If your students hate making it and your faculty hate reading it, it’d be crazy to think that other students would want to go comment on work they don’t care about.2 Mike Wesch recently got 600,000 views on assignment six in the ANTH101 course. Not that 600,000 is the goal, but some audience is. Who cares about this work?

The complex thing about the content/practice/technical entwinement is that all of them impact one another and it will take a chunk of energy to get things flowing. It might take a smaller degree of energy to keep it flowing. In any case, it’s unlikely to just happen. It’s unlikely to happen without stewardship and effort, consideration and revision. There are no perpetual motion machines.3

Multiple Goals and Points of Friction

The thing about most community sites is they usually have multiple purposes. There is the desire to aggregate content to look at the program as a whole (often for accreditation purposes). There are promotional4 aspirations where showing top content might influence students choosing to enter the program, might influence how students in the program consider their own work, might influence how faculty consider the work in their own classes. There are additional considerations around communities of learners and how spaces like this might make, expand, and deepen the connections between them.

Often one site is made to do some of all that, but in that lack of focus it usually doesn’t do any of it very well.

The other piece that bogs stuff like this down is friction. You must figure out how people do things and decide on the smallest changes possible. It would be best if this results in nearly immediate positive reinforcement. This plays into things like identifying content you want to highlight. Think about how you read content, how other faculty read content. Figure out smooth workflows. You may need a few. What is the absolute least work people can do and how close to what they already do can you make the process?

Structured Data – Love All/Serve All

So now that I’ve rambled for way too long, let’s see how technology might serve you in a variety of ways. The main thing it does is either reduce/eliminate friction or allow for different views of the same information based on need.

The Site Browser

It’s usually easy enough to get a list of site URLs. If they’re running a reasonably recent vintage of WordPress, we can grab a bunch of info via the API. In the Google Script to Google Sheets scenario, you’d run something like this.

function getData(url, row) {
 
  if (checkResponse(url)){
    var response = UrlFetchApp.fetch(url+'/wp-json/'); // get feed by API
    var json = response.getContentText();   
  }  
  
  if (isJsonString(json)){
    var data = JSON.parse(json); //parse data   
  }
  
  var ss = SpreadsheetApp.getActiveSpreadsheet(); //get various spreadsheet things
  var sheet = ss.getActiveSheet();
  var blogInfo = [];
  
  if (checkName(data)){
    var theName = data.name; //get the site name
    var theDescrip = data.description; //get the description
    var dt = new Date(); //set date
    var utcDate = dt.toUTCString();
    
    blogInfo.push(theName); //push all this into an array for the row
    blogInfo.push(theDescrip);
    blogInfo.push(utcDate);
    blogInfo.push(dt);
  }else{
    blogInfo = ['error', '', '', '']; //placeholder row so setValues below still gets a full row of four cells
  }
    var range = sheet.getRange('E'+ row + ':H' + row).setValues([blogInfo]); //set the data to the cells 
}


//check dates for updates

function checkFresh(date){
  var now = new Date();
  var utcDate = now.toUTCString(); //'now', not the undefined 'dt'
  if (date && utcDate){
   return true; 
  }
}


//error catchers
function isJsonString(str) {
    try {
        JSON.parse(str);
    } catch (e) {
        return false;
    }
    return true;
}

function checkResponse(url){
  try {
     UrlFetchApp.fetch(url+'/wp-json/');
  } catch (e) {
    return false;
  }
  return true;
}

function checkName(data){
  try {
     data.name;
  } catch (e) {
    return false;
  }
  return true;
}

function today (){
    var today = new Date(); //Date.now() returns a number, which has no toDateString
    today = today.toDateString();
  Logger.log(today);
}

We can then set that stuff to update every X number of days and keep an up-to-date list of sites with additional information. We could also do other tricks here, like checking for the most recent post and doing something to indicate that a site only has the ‘Hello World’ post. That’d let you screen out sites that were just trials. You can also take a closer look at some of the stuff John set up here. They check for abandoned sites. I’ve also got a few that look at the XML feeds of Dokuwiki and Mediawiki, so there’s the potential to do very similar things across a number of platforms.
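In Google Apps Script, the every-X-days piece can be a plain time-driven trigger. A minimal sketch, assuming the function that loops the sheet and calls getData is named updateAllSites (a hypothetical name):

//run this once to install a time-driven trigger
//updateAllSites is a hypothetical wrapper that loops the URL column and calls getData
function installTrigger() {
  ScriptApp.newTrigger('updateAllSites')
    .timeBased()
    .everyDays(3) //refresh every three days
    .create();
}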

Another big hassle was taking screenshots. I’ve always found that to be a pain. It’s certainly a point of friction for anyone doing this at any scale, and it’s a hassle to keep the screenshots updated. After a great deal of looking around over the years, I’ve found very few solutions and no great ones. I’m currently using phantomjs to do it. Essentially I can set up a script and then run a cron task against it. The script is below; it takes the URLs in an array and saves screenshots named by URL in a particular directory.

//from http://j4n.co/blog/batch-rendering-screenshots-with-phantomjs

var URLS = []; //fill this array with the URLs you want to capture

var SCREENSHOT_WIDTH = 1280; 
var SCREENSHOT_HEIGHT = 768; 
var LOAD_WAIT_TIME = 5000; 

var getPageTitle = function(page){
    var documentTitle = page.evaluate(function(){
        return document.title; 
    })
    console.log("getting title:", documentTitle)
    return documentTitle; 
}

var getPageHeight = function(page){
    var documentHeight = page.evaluate(function() { 
        return document.body.offsetHeight; 
    })
    console.log("getting height:", documentHeight)
    return documentHeight; 
}

var renderPage = function(page,index,URLS){

    var title =  URLS[index].substring(7);

    var pageHeight = getPageHeight(page); 

    page.clipRect = {
        top:0,left:0,width: SCREENSHOT_WIDTH, 
        height: SCREENSHOT_HEIGHT
    };
    page.render('/home/bionicte/public_html/gtown/screenshots/'+title+".jpeg" , {format: 'jpeg', quality: '80'});
    console.log("rendered:", title+".png")
}

var exitIfLast = function(index,array){
    console.log(array.length - index-1, "more screenshots to go!")
    console.log("~~~~~~~~~~~~~~")
    if (index == array.length-1){
        console.log("exiting phantomjs")
        phantom.exit();
    }
}

var takeScreenshot = function(element){

    console.log("opening URL:", element)

    var page = require("webpage").create();

    page.viewportSize = {width:SCREENSHOT_WIDTH, height:SCREENSHOT_HEIGHT};

    page.open(element); 

    console.log("waiting for page to load...")

    page.onLoadFinished = function() {
        setTimeout(function(){
            console.log("that's long enough")
            renderPage(page,index,URLS)
            exitIfLast(index,URLS)
            index++; 
            takeScreenshot(URLS[index]);
        },LOAD_WAIT_TIME)
    }
}
var index = 0; 
takeScreenshot(URLS[index]);
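The cron side of that can be a single line; a hypothetical crontab entry (your phantomjs and script paths will differ):

# refresh the screenshots nightly at 2am
0 2 * * * /usr/local/bin/phantomjs /home/bionicte/screenshot-script.js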

Now we can take our Google Sheet and the information there and blend it with the screenshots. In this case, I’m using Vue but it could be done in anything.

<div id="sites" class="container main-content">
<nav class="navbar navbar-default navbar-fixed-top">
      <div class="container">
        <div class="navbar-header">
          <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
            <span class="sr-only">Toggle navigation</span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
          </button>
          <a class="navbar-brand" href="https://georgetown.domains/"></a>
        </div>
        <div id="navbar" class="navbar-collapse collapse">
          <ul class="nav navbar-nav">            
          </ul>
          <ul class="nav navbar-nav navbar-right">
            <li><button @click="searchText='cat_a'" >cat a</button></li>
            <li><button @click="searchText='cat_b'" >cat b</button></li>
            <li><button @click="searchText='cat_c'" >cat c</button></li>
            <li><button @click="searchText=''" >reset</button></li>   
            <li><input v-model="searchText"></li>     
            <li></li>
          </ul>
        </div><!--/.nav-collapse -->
      </div>
    </nav>

  <div class="loading">
      <i v-if="!sites" class="fa fa-spinner fa-spin loading"></i>
  </div>
  <div class="row">
    <div v-for="site in filterBy(sites, searchText) " :key="this.sites" class="col-md-4 the-blog item" transition="site" :style="{backgroundColor: randomColor()}">
        <a :href="theLink(site)" target="_blank" class="commit"> 
          <div class="site-info">
            <div class="title" v-html="theTitle(site)" ></div>
            <div class="description" v-html="theDescription(site)">            
            </div>
       </a>     
          </div>  
            <div>
              <img class="img-fluid" :src="getThumbnail(site)" width="100%" height="auto"/>           
            </div>
        <div class="extra-info">
          <a :href="dataLink(site)" target="_blank"><i class="fa fa-user-circle-o" aria-hidden="true"></i></a> // 
          <a :href="timeLink(site)" target="_blank"><i class="fa fa-calendar-o" aria-hidden="true"></i></a>
        </div>
      <!--<div :data-url="theLink(site)" class="load" >
         {{site.gsx$url.$t.substring(7)}}
        
           <button @click="fetchPosts(site);select($event)" :id="theId(site)">Click me</button>      

      </div>    -->
    </div>      
  </div>  
</div>

The Vue end of that HTML template looks like so.

Vue.use(VueLazyload)

var spreadsheetID = "YOUR-SPREADSHEET-ID";
                      
// Make sure it is public or set to Anyone with link can view 
var blogURL = "https://spreadsheets.google.com/feeds/list/" + spreadsheetID + "/1/public/values?alt=json";

var blog = new Vue({
  el: '#sites',
  
  data: {
    sites: null,
    posts: null,
    searchText: null
  },
  
  watch: {
    //an object literal can't repeat keys, so only one watcher can live here
    currentPage: 'fetchSites'
  },

  created: function () {
    this.fetchSites()
  },
  

  methods: {
    fetchSites: function () {
      var xhr = new XMLHttpRequest()
      var self = this
      xhr.open('GET', blogURL  )
      xhr.onload = function () {
        self.loading = true //for the loading icon; 'self' is the Vue instance, 'this' here would be the XHR
        self.sites = JSON.parse(xhr.responseText)
        self.sites = self.sites.feed.entry  
        self.loading = false;      
      }
      xhr.send()
    },
        
    theTitle: function(site) {
      return site.gsx$title.$t;
    },

    theLink: function(site) {
      return site.gsx$url.$t
    },

    theId: function(site){
      var shortUrl = site.gsx$url.$t.substring(7)
      return shortUrl
    },


    theDescription: function(site){
      return site.gsx$description.$t;
    },

    getThumbnail: function(site){
      var stem = site.gsx$url.$t
      stem = stem.substring(7)
      return 'screenshots/' + stem + '.jpeg'
    },

    fetchPosts: function(post) {
      var postsUrl = post.gsx$url.$t + '/wp-json/wp/v2/posts/?per_page=3'
      console.log(postsUrl)
      var xhr = new XMLHttpRequest()
      xhr.timeout = 2000 // time in milliseconds
      var sitePosts = this
      xhr.open('GET', postsUrl  )
      xhr.onload = function () {
        sitePosts.posts = JSON.parse(xhr.responseText)                
      }
      xhr.send(null)
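      //note: the request is asynchronous, so posts may not be set yet when the lines below run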
      alert(sitePosts.posts)
      return sitePosts.posts      
      
    },

    postTitle: function(thePosts){
      return thePosts[0].title.rendered //use the argument rather than the out-of-scope sitePosts
    },

    select: function(event){
      var targetId = event.currentTarget.id //declare it rather than leak a global
      console.log('id = ' + targetId)
    },

    writePosts: function(id){
      document.getElementById(id).innerHTML = "New text!";
    },

    searchBy: function(cat){
      this.searchText = cat //set the reactive property rather than an implicit global
      console.log(this.searchText)      
      return this.searchText
    },

    randomColor: function(){
      var bgColors = ['#72984b', '#e1261c', '#8a2432', '#b9d9ec', '#f9e08e ']
      var rand = Math.floor(Math.random() * bgColors.length);
      console.log(bgColors[rand])
      return bgColors[rand];
    },

    dataLink: function(site){
      return 'http://bionicteaching.com/gtown/gtown.html#' + site.gsx$url.$t;
    },

    timeLink: function(site){
      return 'http://bionicteaching.com/gtown/blogtime#' + site.gsx$url.$t.substring(7);
    }

  },
  mounted() {
    var buttons = this.$el.getElementsByClassName('load');
    console.log(buttons);
    for (i = 0; i < buttons.length; i++){
      var url = buttons[i].dataset.url;
      console.log(url);
    }
  } 
});

Turns out there is a whoooole lot of stuff I haven’t written up fully and I’m getting tired so consider this a rough round one for this discussion.


1 As opposed to communal sites, like phpBB, which have different options.

2 Apathy is probably worse than hatred in this scenario.

3 As at least two people have messaged me– even Gangnam Style cannot reign forever.

4 I don’t mean that in a derogatory way. There may be a better word– something that means showing good things to encourage good things.

SPLOT-light?


So the other day I posted about how to make silent Google Form submissions. Then this morning I was looking around at headless CMS options1 and saw this one being advertised as being driven by Google Drive/Spreadsheets.

Those two things came together as I mowed my lawn and I wondered if I couldn’t make a little rich text editor to construct a one-piece content creator/displayer using Google Sheets. That led to a little research into URL parameter limits.

And then this evening I made this editor. The page uses Quill to take care of the rich text editor. It turns out there’s a whole world of rich text editor options out there. I’m only scratching the surface with Quill but it works fine for now.

It’s bare bones. You can associate an image via a URL, make a title, and add some text. It does show some interesting possibilities though and all with very little infrastructure or real technical know how.
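For reference, the Quill wiring is minimal; a sketch, assuming a div with the id editor and Quill’s script and stylesheet already loaded on the page:

//minimal Quill setup with the Snow theme -- assumes <div id="editor"></div>
var quill = new Quill('#editor', {
  theme: 'snow'
});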

The image preview is built by this little bit of JavaScript. It’s based on having a text field with the id of ‘theImage’; there’s a check to see whether an image is already attached, and if there is, it gets replaced.

function previewImage(){
    var url = document.getElementById('theImage').value;    
    var img = document.createElement('img');    
    img.src = url;
    var prev = document.getElementById('imgPreview');   
    //check for images already present
    if (prev.innerHTML.length === 0){ //nothing there yet, so append
        prev.appendChild(img);
    } else { //otherwise replace the existing preview
        prev.innerHTML = '<img src="' + url + '">';
    }
}

This little bit of JavaScript builds the Google Forms URL from the various text fields on the page. I set it to run on blur (when a text field loses focus) and when there are edits to the Quill field. Probably not the most efficient path, but it works.

function getData(){
  var title = encodeURI(getTitle());
  var image = getImage();
  var text = encodeURI(document.getElementsByClassName('ql-editor')[0].innerHTML);

  var url = 'https://docs.google.com/forms/d/e/1FAIpQLSc6g67PcQ1-0hhmupOufIoX5tTPio6hwEbIBiLWe5h_k74DCQ/formResponse?usp=pp_url&entry.468287345='+title+'&entry.157183567='+text+'&entry.1663876317='+image+'&submit=Submit';
  document.getElementById('theUrl').dataset.link = url;
  return url;
}

Initially, submission would take you to that URL, but that wasn’t what we wanted. It turns out fetch is quite enough to do what we need. If I decide to make this better, I could do more with the confirmation message and add something to scroll the page down.

function submitPost(){
  var url = document.getElementById('theUrl').getAttribute('data-link');
  console.log(url);
  fetch(url).then(function(response) {
    console.log('yes ' + url);
  }).catch(function(err) {
    console.log(err); //error handling would go here
  });
}

I’m not dead yet . . . Google Scripts to Check on a Bunch of WP Sites

John Stewart is going to write up something more systematic and structured, as he’s taken these rough ideas up a notch, but I figured I’d throw this functional Google Script code in here before I lost it. I believe I got up to five positive statements on blogging more in-process stuff, so I’m taking that as an overwhelming mandate.

These Google Script functions are meant to loop through a set of URLs in a Google Sheet, pulled as an array, to see if each site’s still in use. The first two take a look at the WP REST API endpoints for posts and pages. That way, if a person only writes pages, you won’t be tricked. I’d probably write them all to the sheet because I’m paranoid.

The third (aka the hassle, as I hadn’t ever messed with XML using JavaScript) looks at the RSS/XML feed in case the site isn’t updated enough to have a functional API endpoint or the API is broken for some reason. This won’t help you out if they’re just writing pages, but there’s only so much a person can do.

John made a more robust structure with error checking, the piece where it writes to the spreadsheet etc. and I’ll loop in his post once it’s up.

function checkPosts(url) {
  var api = '/wp-json/wp/v2/posts/';
  var response = UrlFetchApp.fetch(url+api);
  var json = JSON.parse(response.getContentText()); 
  var mostRecentPost = json[0].modified;//note last modified rather than pub date just in case . . . 
  Logger.log(mostRecentPost);
  return mostRecentPost;
}
function checkPages(url) {
  var api = '/wp-json/wp/v2/pages/';
  var response = UrlFetchApp.fetch(url+api);
  var json = JSON.parse(response.getContentText()); 
  var mostRecentPage = json[0].modified;
  Logger.log(mostRecentPage);
  return mostRecentPage;
}
function checkFeed(url) {
  var feed = '/feed/';  
  var xml = UrlFetchApp.fetch(url+feed).getContentText();
  var document = XmlService.parse(xml);
  var response = document.getRootElement();
  var channel = response.getChild('channel');
  var build = channel.getChild('lastBuildDate').getValue();
  return build;
}
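The looping wrapper isn’t shown above, but a minimal sketch of one way to feed a column of URLs to these checkers might look like this (it assumes the URLs live in column A under a header row; checkAllSites is a hypothetical name):

//hypothetical wrapper: read URLs from column A and log each site's most recent post date
function checkAllSites() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  if (sheet.getLastRow() < 2) return; //nothing but the header row
  var urls = sheet.getRange(2, 1, sheet.getLastRow() - 1, 1).getValues(); //skip the header row
  for (var i = 0; i < urls.length; i++) {
    var url = urls[i][0];
    if (url) {
      Logger.log(url + ' -- ' + checkPosts(url)); //checkPosts is defined above
    }
  }
}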

Silent Submission of Google Forms

If you tuned in about half an hour ago, you’d have seen how we’re triggering channel creation in Slack based on a custom post type getting published. One of the other tricks we wanted to happen as a result of that was the creation of a Google Folder. There are a variety of ways to play this, but some of the easier ones would require options we have blocked on our VCU accounts. I could have gone around that via a personal account and subsequent sharing, but it seemed like it’d be more fun to do it this way.

I knew I could trigger script events based on form submissions and that I could use the data in the form as variables as well. I also knew I could fill out form variables via URL parameters.

What I didn’t know was whether I could submit a Google Form without actually hitting submit. Turns out you can.

Take your normal form URL.
https://docs.google.com/a/vcu.edu/forms/d/e/1FAIpQLScK2wgma6Oicv_ZY9i-6tg_w9RfEKKkgiAFJDw15jJnmr5ofQ/viewform?entry.1431785794

You can get one of the pre-filled URL patterns via the form’s pre-filled link option, which gives you a URL like this. You can see my pre-filled response ‘fish tank’ at the end of the URL.
https://docs.google.com/forms/d/e/1FAIpQLScK2wgma6Oicv_ZY9i-6tg_w9RfEKKkgiAFJDw15jJnmr5ofQ/viewform?usp=pp_url&entry.1431785794=fish+tank

Now to make it auto-submit ‘fish tank’ you have to change one piece and add an element at the end. I’ll stack them over one another to better show the difference.
https://docs.google.com/forms/d/e/1FAIpQLScK2wgma6Oicv_ZY9i-6tg_w9RfEKKkgiAFJDw15jJnmr5ofQ/viewform?usp=pp_url&entry.1431785794=fish+tank
https://docs.google.com/forms/d/e/1FAIpQLScK2wgma6Oicv_ZY9i-6tg_w9RfEKKkgiAFJDw15jJnmr5ofQ/formResponse?usp=pp_url&entry.1431785794=fish+tank&submit=Submit

Note the move from viewform to formResponse and the appending of &submit=Submit at the end of the URL.
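If you’re doing that swap in code rather than by hand, it’s a one-liner; a hedged sketch (makeSilentUrl is a hypothetical helper):

//turn a pre-filled viewform URL into a silently-submittable one
function makeSilentUrl(prefilledUrl) {
  return prefilledUrl.replace('/viewform', '/formResponse') + '&submit=Submit';
}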

Now I can create a function in WordPress to call that URL every time a Project is published, like so, and subsequently I can write a Google Script to do what I need to do on that end.

function submitGoogleForm( $id, $post ){
	$formUrl = 'https://docs.google.com/a/vcu.edu/forms/d/e/1FAIpQLScK2wgma6Oicv_ZY9i-6tg_w9RfEKKkgiAFJDw15jJnmr5ofQ/formResponse?usp=pp_url&entry.1431785794=';
	$projectName = $post->post_name;
	$submit = '&submit=Submit';
	$fullUrl = $formUrl . $projectName . $submit;

	file_get_contents( $fullUrl );
}

add_action( 'publish_project', 'submitGoogleForm', 10, 2);
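The Google Script on the other end isn’t shown in the post, but a hedged sketch of a form-bound version might look like this (it assumes the form’s lone question holds the project name and that an “On form submit” trigger is installed for the function):

//hypothetical form-bound Apps Script, run by an "On form submit" trigger
function onFormSubmit(e) {
  var answers = e.response.getItemResponses();
  var projectName = answers[0].getResponse(); //the single question on the form
  DriveApp.createFolder(projectName); //make the matching Google Drive folder
}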
