Category Archives: thirdspace

Photography – #163

I ended up talking to David on the bus from the Medical Campus to Monroe Park. We started off talking a bit about cameras. From there I found out David has six children. Four of those children are adopted. He believes strongly in trying to do good in the world and this is part of that effort. David’s a very devout Christian and referenced God repeatedly in our conversation. He did it in a way that seemed very natural.

We talked a bit about how much children learn from their parents and their actions even when that isn’t the intent. David talked about how he teaches a course on radiology transportation (his current occupation). He talked about starting the class by focusing on making good choices and how, with that foundation, everything else can be learned in time.

David’s brother died about a year ago in a car accident and was an organ donor. He was there when the hospital called his father to confirm the organ donation. His father couldn’t answer the question and gave the phone to David who agreed that all the organs should be donated except the eyes. “Because the eyes are the windows to the soul.”

All in all, a pretty intense and wide-ranging conversation for a short bus ride.


So Many Sites – Cleaning Up Users

There are lots of ways users can end up associated with many sites in a WordPress multisite install. That’s no big deal if it’s only five or ten but sometimes it’s way more. It’s not just messy, it actually degrades performance when you’re logged in because the admin menu bar loads all those sites. This can really become a drag as you pass a hundred or so sites. Previously, I’ve just given up on the user and made a new one. I’ve also gone through https://theSite.us/wp-admin/network/users.php and opened up a number of sites and removed the user from each one manually. That’s a pretty awful pattern but being in a hurry leads to all sorts of bad choices.1

Today I got the request to remove around six hundred sites from a particular faculty member. The request coincided with time and mental bandwidth so I opted to do this in an intelligent way. There was also no way I was going to do this by hand.

The first step is to get the user’s ID from the wp_users table. You can look up users there by user_login or user_email and get to what you need pretty quickly. If you’re using Sequel Pro rather than the terminal, don’t forget to restrict your searches to the right field. For this example, we’ll pretend our user_id is 666.

Now that you have your user ID, you’ll move to the wp_usermeta table. Perform your search for 666 across the user_id field. You should now have just the data associated with that user. Scanning the data should show you a number of entries under the meta_key field that say things like wp_16937_user_level and wp_16937_capabilities. In my case, I found it easiest to select all of these and delete them then re-add the user to the single blog they still wanted. Alternately, you might retain the info related to their primary_blog ID which is listed in this table as well.

If we started seeing this more frequently, I’d build out a PHP-based interaction with this where we could find a user and then manipulate things from there.
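
For the record, here’s a rough sketch of what that PHP interaction might look like (this is the shape of it, not an actual tool; the user ID and the blog ID to keep are placeholders):

// Rough sketch only: strip a user from every site except one.
$user_id      = 666;
$keep_blog_id = 1;

foreach ( get_blogs_of_user( $user_id ) as $blog ) {
	if ( (int) $blog->userblog_id !== $keep_blog_id ) {
		// Drops that site's wp_{blog_id}_capabilities and wp_{blog_id}_user_level rows for the user.
		remove_user_from_blog( $user_id, $blog->userblog_id );
	}
}

// Or, to mirror the delete-everything-then-re-add approach above (role is a placeholder):
// add_user_to_blog( $keep_blog_id, $user_id, 'author' );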


1 Proper water bailing form is rarely considered when the boat is barely being kept afloat.


WordPress Timeline JS Plugin

Background

I like Timeline JS. It’s a nice way to create multimedia timelines. I’d previously done some work that would take WordPress JSON API data and insert it into the Timeline JS view.1 It was nice for creating alternate and standardized views of blogs that might be useful for different reasons. It didn’t serve some other needs, though, and while doing it through a generic URL was handy in many cases, it was odd in other scenarios. As a result, I decided to make a new version as a plugin. If you don’t like reading stuff, there’s a quick video of how it works below.

Plugin Goals

First, I wanted this to be a plugin rather than a theme. That adds a bit of complexity because you don’t have control of the whole scenario but it makes it much more portable and more likely to be used as it doesn’t require people to change themes or spin up an additional site.

I wanted people to be able to use WordPress rather than a spreadsheet to create the content for Timeline JS. Doing that has a few advantages: the WYSIWYG editor, the ability to upload images directly in WordPress, the ability to use posts you’ve already written, etc.

I also wanted people to be able to choose what posts ended up being used in the timeline by choosing a particular category. That would enable them to keep using the blog as normal but also have the ability to pull particular posts into a timeline or create multiple timelines that display only posts from particular categories.

Featured images would be used to create the main image display in the timeline. I wanted to support both start and end times for events and I wanted a fairly intuitive way to set the starting slide/element on the timeline.

How that Played Out

I made a custom post type called Timelines. I figured I’d use that as the authoring element for the timelines. It solved a few problems for me without requiring a shortcode with tons of variables.2
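
Registering that post type is the standard register_post_type call; a minimal sketch (the function name here is made up and the plugin’s actual labels and arguments may differ a bit) looks like this:

function timelinejs_register_post_type() {
	register_post_type( 'timeline', array(
		'labels'      => array( 'name' => 'Timelines', 'singular_name' => 'Timeline' ),
		'public'      => true,
		'has_archive' => false,
		'supports'    => array( 'title', 'editor', 'thumbnail' ),
		'taxonomies'  => array( 'category' ),
	) );
}
add_action( 'init', 'timelinejs_register_post_type' );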

In this case the title and body of the Timeline custom post type becomes the content of the title element of the timeline. I tied the categories to the categories used in normal posts and that enables you to choose however many categories you’d like for inclusion. I went with ‘category__in’ for this but will likely add another metabox to enable you to choose additional category inclusion/exclusion options. I’ll probably also add tags if I see any interest from people.

Technical Stuff

I’m probably almost certainly not doing this the way WordPress would like. I believe they want me to use wp_localize_script to integrate the variables rather than writing the script into the post body. When I started that seemed more difficult and so I went the way I knew.
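
For the record, the wp_localize_script route would look roughly like this (a sketch only, not what the plugin currently does), hung off the 'knightlab_timeline' handle that gets enqueued in Step 1:

// Sketch: hand the data to the enqueued script instead of printing it into the template.
// This would run right after the wp_enqueue_script() call below.
wp_localize_script( 'knightlab_timeline', 'timelineData', array(
	'events' => makeTheEvents( $post->ID ), // already a JSON-encoded string
) );

// and then in the JavaScript:
// window.timeline = new TL.Timeline( 'timeline-embed', { events: JSON.parse( timelineData.events ) } );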

Step 1 – Load the Timeline JS & CSS

You only want this happening on the timeline posts so adding the if (get_post_type($post->ID) === 'timeline') check makes that happen. This is one of those things I ignored initially and then came back and fixed when I started writing this post. Writing blog posts is nice because it documents things and makes me notice all sorts of things I missed in the heat of trying to get a working plugin, but it also sucks because it takes me forever to write the post. These asides are also the reason I have 223 draft posts on my site.

if ( ! function_exists( 'load_timelinejs_script' ) ) {
	function load_timelinejs_script() {
		global $post;
		if ( get_post_type( $post->ID ) === 'timeline' ) {
			$deps      = array( 'jquery' );
			$version   = '1.0';
			$in_footer = false;
			wp_enqueue_script( 'knightlab_timeline', plugins_url( '/js/timeline-min.js', __FILE__ ), $deps, $version, $in_footer );
		}
	}
}
add_action( 'wp_enqueue_scripts', 'load_timelinejs_script' );

function add_timeline_stylesheet() {
	global $post;
	if ( get_post_type( $post->ID ) === 'timeline' ) {
		wp_enqueue_style( 'timeline-css', plugins_url( '/css/timeline.css', __FILE__ ) );
	}
}
add_action( 'wp_enqueue_scripts', 'add_timeline_stylesheet' );
	

Step 2 – Make the Data

A huge chunk of programming is just figuring out ways to generate patterns. Timeline JS has done all the work of building a framework that accepts a certain pattern of data; I just need to make WordPress generate it.

A Custom Post Type with a Custom Template

Since I’d opted to go the custom post type route, I needed to create it and make a particular template for it. I hadn’t done that through a plugin before but the codex came through for me. You might also note that I stick references to where I got things in the code as comments. It helps me when I write these posts, gives additional credit, and builds a useful associative trail that helps me if something breaks and might help others who want to see other related elements.

//FROM https://codex.wordpress.org/Plugin_API/Filter_Reference/single_template
/* Filter the single_template with our custom function*/
function get_custom_post_type_template($single_template) {
     global $post;

     if ($post->post_type == 'timeline') {
          $single_template = dirname( __FILE__ ) . '/timeline.php';
     }
     return $single_template;
}
add_filter( 'single_template', 'get_custom_post_type_template' );

The template is bare bones. It’s built off the basic structure of the 2017 theme. You can see the way the script integrates local variables from the timeline post into the JSON structure . . . and the more I look at it, the more I realize localizing was the way to go. Live/learn. It’ll do for now.

<?php get_header(); ?>

	<body>
		<div class="timeline">
			<div class="content-area">
				<main id="main" class="site-main" role="main">
					<div id="timeline-embed"></div>
					<?php if ( have_posts() ) : while ( have_posts() ) : the_post(); ?>
						<?php
						$post_id = get_the_ID();
						$content = json_encode( get_the_content( $post_id ) );
						?>
					<?php endwhile; endif; ?>

					<script type="text/javascript">
						var the_json = {
							"title": {
								"media": {
									"url": "<?php echo $featured_img_url = get_the_post_thumbnail_url( $post_id, 'full' ); ?>",
									"caption": "",
									"credit": ""
								},
								"text": {
									"headline": "<?php echo get_the_title( $post_id ); ?>",
									"text": <?php echo $content; ?>
								}
							},
							"events": <?php echo makeTheEvents( $post_id ); ?>
						};

						window.timeline = new TL.Timeline( 'timeline-embed', the_json );
					</script>

				</main><!-- #main -->
			</div><!-- .content-area -->
		</div><!-- .timeline -->
	</body>
<?php get_footer();

Event Data

The following function creates the event JSON data using the query loop. I limited it to 40 events.

I’m only sort of using the Event class properly. I got warnings when trying to add elements to the main structures (that’s why you see the @ prepended – @$event->media->url). The object-oriented side of things is something I know basically nothing about. It makes it harder to Google things without the right vocabulary, so some actual structured attempts to learn this are on the near horizon.


class Event {
    public $media = "";
    public $start_date = "";
    public $text = "";
    
}

function makeTheEvents ($post_id){
	        $cats = wp_get_post_categories($post_id); 

	        //if custom field type is set to a custom post type then get that instead
	        if (get_post_meta($post_id, 'type', true )){
	        	$post = get_post_meta( $post_id, 'type', true );
	        } else {
	        	$post = 'post';
	        }

			$args = array(
				'posts_per_page' => 40, 
				'orderby' => 'date',
				'category__in' =>  $cats,
				'post_type' => $post,
			);
			$the_query = new WP_Query( $args );
			// The Loop
			$the_events = array();

			if ( $the_query->have_posts() ) :				
			while ( $the_query->have_posts() ) : $the_query->the_post();
				$the_id = get_the_ID();
				//get the featured image to use as media
				if (get_the_post_thumbnail_url( $the_id, 'full')){
					$featured_img_url = get_the_post_thumbnail_url( $the_id, 'full');
					$thumbnail_id = get_post_thumbnail_id( $the_id);
					$alt = get_post_meta($thumbnail_id, '_wp_attachment_image_alt', true);
					$caption = get_post($thumbnail_id)->post_excerpt;
				} else {
					// Reset these so a post without a thumbnail doesn't inherit the previous post's values.
					$featured_img_url = "";
					$alt = "";
					$caption = "";
				}

				$event = new Event();
				//MEDIA
				@$event->media->url = $featured_img_url;
				@$event->media->caption = $alt;
				@$event->media->credit = $caption;
				//DATE
				@$event->start_date->month = get_the_date('n');
				@$event->start_date->day = get_the_date('j');
				@$event->start_date->year = get_the_date('Y');
				//TEXT
				@$event->text->headline = get_the_title();
				@$event->text->text = get_the_content();
				//END DATE
				if (get_post_meta($the_id, 'end_date', true) && get_post_meta($the_id, 'end_date', true)["text"]){
					$end_date = get_post_meta($the_id, 'end_date', true)["text"];
					@$event->end_date->month = intval(substr($end_date, 5, 2));
					@$event->end_date->day =  intval(substr($end_date, -2));
					@$event->end_date->year =  intval(substr($end_date,0, 4));
				}
			    array_push($the_events, $event);
				endwhile;
			endif;
			// Reset Post Data
			wp_reset_postdata();
			$the_events = json_encode($the_events);
			return $the_events;
}
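
For what it’s worth, one way to lose the @ suppression would be to give Event real nested objects up front. A refactor sketch (not what the plugin currently does):

class Event {
	public $media;
	public $start_date;
	public $text;

	public function __construct() {
		// Real objects up front means properties can be set directly, no warnings to silence.
		$this->media      = new stdClass();
		$this->start_date = new stdClass();
		$this->text       = new stdClass();
		// end_date could be attached the same way, but only when a post actually has one.
	}
}

// Then inside the loop:
// $event = new Event();
// $event->media->url = $featured_img_url;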


1 A blog post on how that works via Google Sheets is here. I also built another version using URL parameters and PHP but may not have written the blog post. In any case, a working example of that is here.

2 And the looming specter of Gutenberg obsolescence.

Weekly Web Harvest for 2018-02-11

  • Porsche Classic supplies classic parts from a 3D printer

    Due to the consistently positive results received to date, Porsche is currently manufacturing eight other parts using 3D printing. The parts in question are steel and alloy parts produced using the selective laser melting process, and plastic components manufactured using an SLS printer

  • HUBzero – Home

    Host analytical tools, publish data, share resources, collaborate and build communities in a single web-based ecosystem.

Screen Time

Every time I hear something about limiting screen time I cannot help but think about how poorly the concept has been thought out.

If we talked about “food time” instead, maybe that would help us see that while time matters (eating for hours each day is probably a bad idea), how long you eat matters far less than what you’re eating. You have to think about both things. Funneling cheetos for 30 minutes a day is worse than eating carrots for an hour.1

Screen time isn’t a single thing. It’s an insane range of things. There’s lots of screen time that is of Twinkie quality but there are many other options. If I read a book on a device is it screen time or is that reading? If I’m coding for an hour? Editing video? Video chat with my parents? When we reduce things to this extent we end up doing things that ignore the actual problem.

So the next time someone on the radio or TV talks about screen time as if it were a single thing please join me in envisioning the giant cartoon heads depicted below.


1 Funneling cheetos may not even qualify as actually eating. It’s a chemical endurance sport that will likely be featured in the next Olympic Games.

Using AWS for Data Analysis

I’m not really sure when this happened, but over the last several years, I’ve started to spend a lot of my personal and professional time working on building out data visualization tools and workflows. In a recent meeting, someone referred to us as data scientists, and we’ve had a good running joke ever since. While I appreciate the nod, I’m not sure I’m ready to refer to myself as one. As most self-taught people in tech know, it takes a while to really feel comfortable claiming a title you don’t think you deserve, having stumbled backwards, or perhaps half-blind, into your present skill set.

Either way, as someone good with data stuff (SQL, web dev, dashboards, data viz, and Python), I’m increasingly being asked to provide advice as some sort of subject matter expert on the tools in the modern data toolkit.

While this is a lot of fun, it really starts to make me think about my own practices, which are admittedly ever evolving and self-serving. On a recent project, I was asked to advise a professor in the Information Systems department teaching a Business Intelligence class for the Online MBA at VCU. He has a pretty ambitious goal: get MBA students to conduct basic BI/data analysis tasks using tools available in AWS.

Below are just a few of my thoughts on using some of the tools available in AWS.

AWS Athena: Who Needs a Stinking Database

One of the first tools I started looking at was AWS Athena. This is a very cool service that simplifies a lot of things for anyone looking to work with data. In most cases, if you have more than one spreadsheet of data to analyze, or want to do some advanced querying, you will want to load your data into one or more SQL tables using some SQL variant.

Basically, there are a few ways to get data from one source, say a CSV, into a SQL database. Some of the ways I’ve done this in the past involve either loading the CSV files using a database management tool, or scripting the same thing with Python. As an example, for my IPEDS data analysis, I spent tons of upfront time importing CSV data to create my initial SQL tables. Some of these tables have 200K+ rows, so as you can imagine, that process took some time and was not without error.

While this was clearly the best path, CSV -> SQL, it took an unnecessarily long time considering all I wanted to do was run a few queries, then export the resultant data as JSON for use elsewhere. This is where Athena comes in handy.

Athena allows you to upload your data to S3, then create ‘virtual’ databases and tables from that structured data (CSV, TXT, JSON). From there you can use an interface, or I’m assuming an API as well, to run queries against that data directly from S3.
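
There is an API for it. A rough sketch of that route using the AWS SDK for PHP (the database, table, and results bucket below are placeholder names, not anything from this project):

require 'vendor/autoload.php';

use Aws\Athena\AthenaClient;

$athena = new AthenaClient( array(
	'version' => 'latest',
	'region'  => 'us-east-1',
) );

// Kick off a query against the S3-backed table.
$result = $athena->startQueryExecution( array(
	'QueryString'           => 'SELECT city, AVG(price) AS avg_price FROM listings GROUP BY city',
	'QueryExecutionContext' => array( 'Database' => 'real_estate' ),
	'ResultConfiguration'   => array( 'OutputLocation' => 's3://my-athena-results/' ),
) );

// Queries run asynchronously: poll getQueryExecution() until the state is SUCCEEDED,
// then pull rows with getQueryResults( array( 'QueryExecutionId' => $result['QueryExecutionId'] ) ).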

Full blown databases are great at providing read/write access, while also giving you access to more advanced querying tools. But if you are working with a static dataset like I am in the IPEDS project, I’m not really reaping any of the benefits of the database, aside from the SQL querying.

At the same time, AWS’ provisioned database service is really expensive, almost 2X the cost of a virtual server with the same specs. And the cost of those is calculated in hours, not just the time spent querying or writing data. Athena, on the other hand, only charges you for the queries you run, which would make it a more cost-effective choice for organizations that have large amounts of data.

Needless to say, I’m going to shift some of my own projects over to using this technology, especially when I’m sure I won’t need to continue to write data to tables. For just loading and transforming the data, it seems like Athena is the way to go.

AWS Quicksight: Why Do I Even Program?

Over the last few years, I’ve built out a number of dashboards manually using some variant of D3 or other charting libraries. However, the more time I spend with data folks, the more I question whether ‘bespoke’ dashboards written in JavaScript are the way to go.

Technology has a funny way of making hard things trivial over time, and I think we’re about there with modern “business intelligence” tools like Tableau, Power BI, and Quicksight.

Sticking with my theme of AWS for most things, I decided to give Quicksight a try. Since I just wanted to build out some basic examples, I downloaded a quick CSV file of active real estate listings in Richmond, VA from Redfin.

Real estate tends to be a nice domain for test data sets since there are tons of variables to look at, the data sources are plentiful, and it is easy to understand.

Overall, my experience using Quicksight to build out a dashboard was pretty excellent, and I was able to use a nice drag and drop interface to design my visuals.

At a high level, Quicksight handles data like a boss. It does a great job of inferring the types of data from your source. Without coercion, it picked up the geospatial components of my data set, along with the other numerical and textual data.

Again, it impressed me with the breadth of available visual models, and the ease with which I could construct them. For most of the visuals, such as this heat map, I’ve built an equivalent chart type somewhere in JavaScript. While I was a bit disappointed in my ability to customize the display of the different charts, I was impressed with how easy they were to create.

It seems like Quicksight, and I’m sure to a greater extent Tableau, are trying to strike a balance between ease of use and customization. It appears there is only so much I can do to make things look better, but there is only so much visual harm I can do as well.

In the end, I really liked using Quicksight, and it made me take a second to question when a tool like this is a better choice than some sort of web dashboard. However, Quicksight is built around the idea of an ‘account’ with users, and does not appear to have an easy way to publish these visuals to the public web, which for my work is a huge downside. I think this is where Tableau might have an edge with their public gallery or paid hosting.

 

The post Using AWS for Data Analysis appeared first on Jeff Everhart.


Weekly Web Harvest for 2018-02-04

  • Bildung – Wikipedia

    The term Bildung also corresponds to the Humboldtian model of higher education from the work of Prussian philosopher and educational administrator Wilhelm von Humboldt (1767–1835). Thus, in this context, the concept of education becomes a lifelong process of human development, rather than mere training in gaining certain external knowledge or skills. Such training in skills is known by the German words Erziehung, and Ausbildung. Bildung in contrast is seen as a process wherein an individual’s spiritual and cultural sensibilities as well as life, personal and social skills are in process of continual expansion and growth.

  • Apple HomePod review: locked in – The Verge

    When you set down a HomePod and play music, it goes through a number of steps to tune itself. First, it tries to create a model of the room it’s in by detecting the sounds reflecting off walls. It does this in two passes: the first pass builds a model to a high degree of initial confidence, and the second pass refines the model. This happens faster if you’re playing music with a lot of bass.

  • Episode No. 113: What’s Going On in This Graph? – Policy Viz

    Michael Gonchar and Sharon Hessney lead a new project at the New York Times called “What’s Going On in This Graph?” (WGOITG). Every second Tuesday of every month, the NYT publishes a graphic on a topic suitable for subjects across the middle school and high school curricula. They might remove some key information, such as titles, labels, and annotation, and then ask the students three questions:

    • What do you notice?
    • What do you wonder?
    • What’s going on in this graph?

  • This Mutant Crayfish Clones Itself, and It’s Taking Over Europe – The New York Times

    Before about 25 years ago, the species simply did not exist. A single drastic mutation in a single crayfish produced the marbled crayfish in an instant.

    The mutation made it possible for the creature to clone itself, and now it has spread across much of Europe and gained a toehold on other continents.

  • fulldecent/system-bus-radio: Transmits AM radio on computers without radio transmitting hardware.

    Transmits AM radio on computers without radio transmitting hardware.

  • The Anatomy of a Data Story

    Knaflic explains that “it’s not the graph that makes the data interesting. Rather, it’s the story you build around it—the way you make it something your audience cares about, something that resonates with them—that’s what makes data interesting.”
