Jacob Filipp

Category: Writing

Sync Outlook Calendar to Google Calendar using Microsoft Power Automate

Summary: Getting Outlook events to show up in Google Calendar can be a challenge due to your workplace’s security settings. If you have access to Microsoft Power Automate (aka Flow), you can use this .zip package to set up an integration.

This Flow is based on this Outlook to Google Calendar sync package by Alex Matulich. Thank you for laying the groundwork, Alex!


Tool – convert HTML list to Plaintext

This tool converts a list of HTML elements, such as a dropdown’s <option> elements or the <li> items of an <ol> list, into plaintext, one item per line. This helps when you’re quickly copying a list of dropdown items (using Developer Tools) into Excel.

Limitation: the current version works only when each element is on its own line. This isn’t necessarily always the case with valid HTML, though.
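For illustration, here is a minimal sketch of what such a conversion could look like in JavaScript. The function name and the regex approach are my own, not the tool’s actual implementation, and it assumes (per the limitation above) that each element sits on its own line:

```javascript
// Hypothetical sketch: strip the HTML tags from each line and keep
// one item per line. Assumes each <option>/<li> is on its own line.
function htmlListToPlaintext(html) {
  return html
    .split("\n")
    .map(function (line) {
      return line.replace(/<[^>]+>/g, "").trim(); // drop tags, trim whitespace
    })
    .filter(function (line) {
      return line.length > 0; // skip lines that held only markup
    })
    .join("\n");
}
```

For example, calling htmlListToPlaintext('<option value="1">Apples</option>\n<option value="2">Pears</option>') returns "Apples\nPears", ready to paste into Excel.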

How to make iFrames auto-resize to 100% height based on content

Do you need to ensure your iframe’s height is always set to 100% of the height of the content you are embedding? If so, keep on reading.

This page contains freely-usable code for responsive iFrames. iFrame height will adjust based on the height of the content in them. The code works for cross-domain iFrames and does not use any libraries like jQuery.

The problem

As a Marketing Operations professional, I often need to put marketing forms onto webpages. Normally, I would put code like this onto the page so the form appears in an iFrame:

 <iframe src="https://mydomain.com/the-form.html" style="width:1px; min-width:100%; border-width: 0px;" allowtransparency="true"></iframe> 

The problem with iFrames is that you don’t always know their full height. Also, their height can change unexpectedly.

When that happens, an unattractive scrollbar gets added next to the content – and the content no longer looks like it is a natural part of the parent webpage.

Below is an example of a Pardot marketing form that’s been embedded using an iFrame. Note what happens when the form is submitted with missing information: several error messages appear and add a distracting scrollbar at the right side.

This GIF shows an iFrame scrollbar appearing after the embedded form’s content gets taller.

What we want is for the form’s iFrame to grow vertically so that all the content shows up. No scrollbar.

Note: The animation of the Pardot form above is using a form from a post by Alex Avendano at EBQ, on customizing the design of Pardot forms. That post contains some great sample code for making your forms look good.

The solution

I have prepared some JavaScript code that solves the problem. It monitors the iFrame for changes in height, and then uses the window.postMessage() mechanism to instruct the parent page to resize the iFrame. This works even when the 2 pages are on completely different domains (which is usually a big challenge).

You are free to use this code for any purpose, under the MIT License.

Using the code on your site

Copy the following code into the <body> of the parent page hosting the iFrame:

MIT License

Copyright (c) 2019 Jacob Filipp

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.


// Add this script to the parent page on which your iFrame is embedded.
// This code resizes the iFrame's height in response to a postMessage from the child iFrame.

// event.data - the object that the iframe sent us
// event.origin - the URL from which the message came
// event.source - a reference to the 'window' object that sent the message
function gotResizeMessage(event)
{
	console.log("got resize message: " + JSON.stringify(event.data));

	var matches = document.querySelectorAll('iframe'); // iterate through all iFrames on the page
	for (var i = 0; i < matches.length; i++)
	{
		if (matches[i].contentWindow == event.source) // found the iFrame that sent us a message
		{
			console.log("found iframe that sent a message: " + matches[i].src);
			//matches[i].width = Number(event.data.width); // we do not do anything with the page width for now
			matches[i].height = Number(event.data.height);
			return 1;
		}
	}
}

document.addEventListener("DOMContentLoaded", function() {
	window.addEventListener("message", gotResizeMessage, false);
}); // on DOM ready

Copy this second script into the <body> of the page that will be contained in the iFrame:



// Add this script into the page that will appear inside an iFrame.
// This code monitors the page for changes in size. When a change is detected, it sends the latest size to the parent page using postMessage.

var iframeDimensions_Old; // the last dimensions we sent to the parent

// determine the height of the content on this page
function getMyHeight()
{
	return Math.max(document.body.scrollHeight, document.documentElement.scrollHeight);
}

// send the latest page dimensions to the parent page on which this iframe is embedded
function sendDimensionsToParent()
{
	var iframeDimensions_New = {
		'width': window.innerWidth, // supported from IE9 onwards
		'height': getMyHeight()
	};

	// if the old width is not equal to the new width, or the old height is not equal to the new height, notify the parent
	if ((iframeDimensions_New.width != iframeDimensions_Old.width) || (iframeDimensions_New.height != iframeDimensions_Old.height))
	{
		window.parent.postMessage(iframeDimensions_New, "*");
		iframeDimensions_Old = iframeDimensions_New;
	}
}

// on load - send the page dimensions. (we do this on load because by then all images have loaded)
window.addEventListener('load', function() {

	iframeDimensions_Old = {
		'width': window.innerWidth, // supported from IE9 onwards
		'height': getMyHeight()
	};

	window.parent.postMessage(iframeDimensions_Old, "*"); // send our dimensions once, initially - so the iFrame is initialized to the correct size

	if (window.MutationObserver) // if MutationObserver is supported by this browser
	{
		var observer = new MutationObserver(sendDimensionsToParent);
		var config = {
			attributes: true,
			attributeOldValue: false,
			characterData: true,
			characterDataOldValue: false,
			childList: true,
			subtree: true
		};
		observer.observe(document.body, config);
	}
	else // if MutationObserver is NOT supported
	{
		// check for changes on a timed interval, every 1/3 of a second
		window.setInterval(sendDimensionsToParent, 300);
	}

}); // end of window.onload


Alternatively, you can save this script to your server and reference it using <script src="yourpath/parentcode.js"></script> from the page hosting the iFrame.

Similarly, you can save the second script to your server and add it to the page contained in the iFrame using <script src="yourpath/iframecode.js"></script> .

Improvements + feedback

You can improve this code for everyone if you send me your feedback. If you run into a bug, or successfully modify the code to solve a bug, then please share this information with me. Either comment on this post or email me at “j@thisdomain” .

Implementing the code for Pardot forms

The iFrame-resizing issue is a special pain for Pardot marketing forms – when you are looking to embed a form onto a page on your main website (which means that you have a cross-domain iFrame).

Here are some concrete steps for implementing the code above in Pardot:

  1. Click on the “Marketing” navigation item
  2. “Forms”
  3. “Layout Templates”

This will let you alter the code that appears with every Pardot form that will be contained inside an iFrame.

Go to the template you use most often to render your forms. In our system, we use the default template called “Standard”. Click on it.

Next, click “Edit” on the layout template.
When you are editing, open the Form tab to edit code that only has to do with forms (other layout code tabs are for other things, like landing pages). Finally, copy the JavaScript code for the page that will be contained inside an iFrame and paste it right before the opening <form> tag.

Finally, add the “parent hosting the iFrame” code onto the page on your site where you’ll embed the Pardot form. You can do this by linking the “parentcode.js” file to the page.

Together, these 2 pieces of code are going to ensure that your marketing forms look like they belong on your page – without any scrollbars.

A more advanced solution

When researching this post, I came across a much more robust package for dynamic iFrame resizing called “iFrame Resizer” (and accessible at https://github.com/davidjbradshaw/iframe-resizer) . The creator is David Bradshaw. David’s package detects many more “resize” events than my code – events such as a mobile device being rotated, or a CSS animation affecting the size of the iFrame.

If you need a more heavy-duty solution to the iFrame dynamic height problem then give the “iFrame Resizer” a try.

How we did it: Top Tasks analysis on the LexisNexis website


Have you ever wondered if your website is truly helping your visitors accomplish their goals?

It is usually challenging to find out. Often, the opinions of your coworkers, managers, and complaining customers are the ones you hear the loudest. It is difficult to find out what the average site user thinks.

One way to find out is through a “Top Tasks” survey. This approach has been used by governments, health services, universities and software companies like Microsoft. At its core, this is a survey that reveals users’ basic needs by overwhelming their “thinking brain” with choice. The purpose of this document is to help you run your own Top Tasks survey. You will learn how to set one up through a step-by-step case study of the survey I ran at LexisNexis Canada (a company that makes software for lawyers). You will discover surprising real-world insights that go beyond other published materials on Top Tasks.

Why is a Top Tasks survey special?

This type of survey is used to discover which end goals your website visitors value the most. It goes beyond “what information do visitors want?” towards answering “what action can I help them accomplish?”.

There is a twist: in this survey, the visitor is shown about 100 different tasks and urged to choose the top 5 quickly – in this way, we get a “gut reaction” choice rather than one that they had consciously thought about.

Here is an example of a Top Tasks survey question, with a few choices:

This methodology was created by user experience consultant Gerry McGovern, and explained in his 2010 book The Stranger’s Long Neck. It is essentially a sped-up version of a common technique called Card Sorting. The importance of focusing on Top Tasks was also covered in a popular A List Apart article in 2015.

The benefits of a Top Tasks survey are:

  1. It helps move your organization’s thinking away from “creating content” towards “facilitating a user’s task”. After all, reading information on a website is not what your visitors are there to do – it is just a step towards their bigger goal.
  2. More objective than other ways of determining visitor priorities. These other ways include HiPPO (the Highest Paid Person’s Opinion), interpreting web analytics and running focus groups.
  3. Top Tasks Surveys are more affordable than focus groups when surveying large numbers of people. They can be run online and require less overhead.

Can’t web analytics software tell me what my visitors want?

No. The type of content we already have on the site biases our analytics data. What if your visitors are looking for information that is missing from your site? There is no “top 10 missing pages” report that would show you that.

Web analytics data also can’t tell us if a person succeeded in performing a task. For example, let’s say that your company sells a variety of software products. When a visitor lands on your site and views 10 different product pages – is that a good thing? Did they come to your site to get an overview of your range of offerings? (In that case, mission accomplished!) Or did they come to read more about a specific product, couldn’t find it, and had to browse around while hoping to stumble on it? (In that case, it’s a bad thing that they had to view so many extra pages.) Analytics can’t answer this type of question. You must ask the visitor directly.

Who else uses Top Tasks surveys?

Here are organizations that use Top Tasks surveys. Click the links to learn more about their specific experience with the methodology.

As you can see, these surveys are popular with big organizations. They are most useful to organizations that serve many documents, of different types, to people with a variety of needs. (Contrast this with a company that makes 1 product for 1 type of customer – it can learn the same lessons by just having 5 “Jobs To Be Done” phone calls with clients.)

Creating your own survey: an overview

Do you think that your organization could benefit from a Top Tasks survey?

Creating your own survey involves the following steps:

  1. Planning
    Determine which team members to involve; how comprehensive to make the survey and your approach to recruiting participants.

  2. Preparing a list of Top Tasks
    Brainstorming a large list of tasks and reducing them to a “short list” of 50 to 100.

  3. Setting up the survey
    Creating a plan for recruiting participants, setting up your survey software and deciding on any participant perks that are needed.

  4. Running the survey
    Observing survey responses as they come in, learning and adjusting for unexpected challenges.

  5. Analyzing the survey and applying the insights
    Turning the raw data into insights. Turning the insights into action.

Let’s see how the team at LexisNexis Canada went through these steps, and what lessons we learned along the way.

The LexisNexis challenge

LexisNexis Canada is a company that makes software for lawyers. There are many software packages on offer, and several types of lawyers that we sell to.

To meet everyone’s needs, our site had to become quite complex.

It didn’t help that the site navigation was organized around products – so visitors needed to know what product they are looking for, in advance. It was difficult to discover new products if visitors hadn’t heard of them already.

In 2017 we decided to discover the main things our website visitors wanted to accomplish. This was part of a big navigation and content redesign initiative. The redesign involved over 20 meetings with internal stakeholders, where we determined the top priorities for the website. The Top Tasks data was going to be the voice of the customer in these discussions – preventing us from focusing just on our own corporate needs.

Personally, I have wanted to run a Top Tasks survey for about 4 years. This redesign project was the first big opportunity I’ve seen to reap benefits from this kind of survey.

In the real world, you probably won’t run a Top Tasks exploration project on its own “just to improve the customer experience”.

It will be a tool you use to solve bigger problem that your organization is tackling. Here are a few types of projects that would benefit from this kind of exploration:

  • Reducing the cost of customer support by making the website more “self-serve”
  • Ensuring that time and money is being directed towards only the highest-value types of content
  • An initiative to improve customer experience (usually will come from “customer experience” becoming a new top priority for management)
  • A requirement to increase conversion rates on the site (for generating leads and sales)

For these kinds of projects, you will need to convince others that the Top Tasks methodology is powerful and appropriate. Establish the reliability of this method by referring to Gerry McGovern’s blog, book and examples of the EU Commission and other organizations’ adoption of it. Make the point that this approach isn’t risky – it’s been tried by many others.

Step 1: Planning

After getting approval to start, plan how you will tackle the project. Don’t rush into execution right away.

Whose Top Tasks do you care about?

Decide whose Top Tasks you’re going to focus on. For LexisNexis, we decided to focus on capturing the opinions of website visitors (because it was easy to do with the tools we had and represented a large portion of our actual stakeholders).

Another company could have chosen to focus on prospective customers. This might’ve required setting up a pre-survey question to filter out existing customers, or travelling to an industry conference to survey a general group of potential customers in person.

For example, the University of Glasgow decided to discover 2 sets of Top Tasks, one for students and one for faculty.

How will you determine success?

Think about the value that your Top Tasks exploration will deliver… and it can’t be “a Top Tasks analysis report”!

For my project, good value-for-money looked like this:

  1. The production of a printed Top Tasks report. It would be distributed widely, so that these insights would not be lost on a shared network drive somewhere.
  2. The inclusion of the Top Tasks into every site-redesign discussion. This was going to be a solid document that I would use to support my arguments in favour of customer desires, whenever they clashed with the organization’s needs.
  3. The creation of the document you’re reading right now, to provide value to others outside of my organization. (This meant that I had to keep careful records and take plenty of screenshots as I ran the project).

Note something important that LexisNexis left out: we never did a “Before and after” assessment of how easy it is for people to perform key tasks on our website. This kind of benchmarking shows you whether the changes you made actually led to the desired improvement. Gerry McGovern uses a Task Performance Indicator score to do this assessment, but there are other established ways of doing this (Like this method from Jakob Nielsen). For this project, benchmarking usability improvements was too costly and time consuming.

How will you capture insights?

At LexisNexis, we decided to recruit website visitors to our survey by showing them a popup invitation (using the tool GetSiteControl). To incentivize visitors to take the survey, we offered a $5 credit to a popular coffee chain. The survey itself was going to be run through the Qualtrics tool.

To ensure a good cross-section of visitors was represented, we added a pre-survey question about the type of organization each visitor belongs to, and capped the number of responses from each type.


Because the Top Tasks survey was part of a bigger project, we decided to run it only for as long as it took to get about 50 responses in each of our major customer categories.

50 responses were deemed enough to give us a general idea of these visitors’ needs.

Running a “pilot”

I recommend running a small-scale “pilot survey” whenever you’re giving away monetary incentives to survey participants. That’s because a pilot helps reveal errors in your setup that would cause a large loss of money.

We decided to run a 2 day “pilot” survey to discover any problems ahead of the full survey.

From just 17 pilot responses we discovered that law students were sharing the survey with their friends as an easy way to get a coffee card. This was a problem for us, because we were getting many similar responses. Read on to find out how we changed the survey to avoid this problem.

Who to involve in creating the Top Tasks list for your survey?

Your survey will ask people to pick their top 5 tasks from a long list of options. To think up good task ideas, you will have to recruit a team of people to help.

For your team, try to involve people who have contact with your clients at different stages of their relationship with your organization: sales, training, customer support, account service and billing. No need to involve more than 1 person from each organizational function.

Don’t be tempted to simply come up with a long list of tasks on your own. It’s fast to do it alone, but you’ll also miss some great insights.

At LexisNexis, I asked the following people to help think up tasks:

  • Scott, Team Lead for the customer support group
  • Marla, an Account Manager on the Sales team
  • Jeff, a member of our University relationship team. Former head of Customer Success & Training

Step 2: creating your long list of Top Tasks

The next step is the creation of a list of tasks that you will present to survey takers. You start by brainstorming a “long list” of hundreds of tasks and then reduce to a “short list” of 50 to 100.

Here is how to do this:

Create a first list of task ideas

Alone, start compiling a list of all sorts of tasks, questions and information that a person might want to accomplish on your site. Keep it open-minded and sloppy – don’t try to write them out in a consistent format.

My task ideas came from the following sources:

  • The top recurring Customer Support inquiries (chances are, your Support team already has a list of “common items”)
  • Web navigation links from our e-Bookstore site
  • Tasks that our current website supports
  • My own brainstorming
  • A preliminary email to meeting participants, to gather some ideas
    (Ask people to send you their ideas directly, without carbon-copying others. You want “fresh” ideas that aren’t influenced by what others have said.)
  • Navigation items from the websites of our foreign sister-companies
  • Tasks from the websites of different competitors

Gerry recommends that you record the source of inspiration for each potential task. I didn’t find value in doing this. I could see that this would be useful in a situation where you must prove to someone that you did a thorough job.

Your own list might also include task ideas from:

  • Questions from users/prospects on social media
  • Postings on user forums
  •  “Free text” responses to customer satisfaction / “Net Promoter Score” surveys
  • Search terms from your own website/Knowledgebase (can find them through analytics software)
  • Search terms from Google Webmaster Console
  • Stakeholder interviews

Group session 1: adding more task ideas

The next step is to expand your list of tasks. Set up a meeting where members of your team brainstorm together. Don’t edit down the list just yet.

Ahead of the meeting, share your first list of tasks. Share it in a way that lets everyone see the latest version and makes it easy to add to it during the meeting.

In our case, I put the list of tasks on a shared Google Doc. Our meeting included a remote participant. Google Docs allowed him to see our task additions in real time, and to add his own as we went.

Begin the meeting with a review of tasks you have at this point. Then, have the group discuss the different types of stakeholders that they work with – what common questions do they ask? What common problems do they have? Have the group go on a meandering conversation and take note of any website tasks that are mentioned.

At the end of the meeting we had 123 proposed tasks, including duplicates. You should aim for at least 100.

Group session 2: reducing the list of tasks

The next step is to reduce the long list of tasks you created.

On your own, start grouping related tasks together. When done, look for duplicates.  For every group of duplicates, come up with a phrase that captures all the meaning of the duplicate phrases in a succinct way. Group duplicates together in a way that still lets you see all the original task phrases – you will review them with the team.

Schedule another meeting with your Top Tasks team. This time, the goals of the meeting will be a) to spot duplicate items together (ones that you missed) and b) to review whether the team agrees with your classification of certain phrases as duplicates.

Carefully evaluate whether certain tasks are indeed duplicates, or whether they represent several different tasks that people are trying to accomplish.

Here are some examples of actual duplicate tasks we encountered:

These task phrases…

  • purchase books
  • A place to buy a book online with a credit card
  • buy a print product/compare prices/look for sales
  • purchase loose-leafs

…became this: “Purchase a publication”

And these…

  • live training (guided training)
  • support & training
  • training on how to use a product
  • Refresher training for using software
  • Where can I find training videos?
  • Finding training documentation
  • Finding training videos
  • Contact information for training
  • Contact information for training (hopefully a live chat capability)
  • look for user guides and training videos
  • training webinars
  • live training (guided training)
  • Training options

…became this: “Resources for training”
At the end of this 2nd session our list of tasks was down to 77.

Keep clarifying the tasks and reducing the list

After the team session, continue reducing duplicates on your own. Start rephrasing tasks so that they are written in a consistent short way.

You can find more tips on how to finesse your list of tasks at this post that contains notes from a Gerry McGovern seminar. Watch out for organization-centric language, product brand names, acronyms and tasks that are too lengthy.

Survey takers will have limited time to skim over the tasks, as they make their choices. Make the tasks “easy to skim”: start them off with the most “information rich”/distinctive word you can find. Instead of “find contact information” say “contact us”.

After going through this step, I shared my altered list with the workshop participants and got their OK. We had asked for their input through 2 sessions, so it was only fair that they would have visibility into how the edited list looked.

Next, I recommend sending an email to a group of website stakeholders. Show them the latest list of tasks and ask them to email you with any additions before a certain date. You are not waiting to hear from every one of them – just giving coworkers a chance to be heard.

When I did this, we only got 7 extra suggestions from 3 people. But the main result was that 12 additional people felt engaged in the project.

After a final round of rephrasing with our Lead of Customer Insights, the final list had 62 tasks.

Some examples of how you might rephrase tasks to be clearer:

Before → After (what you’re accomplishing):

  • “status of my order” → “Check status of my order” (more action-oriented)
  • “Which tool will solve my problem?” → “Which products fit my needs?” (clarifying meaning)
  • “accounts and billing” → “Account information” + “Pay a bill” (breaking out 2 distinct tasks)

If you work at a software company, be mindful that task phrasing must be clear about whether the task is to be done on your marketing site or inside the product that you sell.

For example: one of the main use cases for our website is for people to simply log in to our “cloud” software products. Many people don’t think of our corporate site and the online products as separate entities – that’s why many of their task selections made more sense in terms of their desire to perform them inside the products we sell. Next time, I would phrase the tasks more explicitly.

Step 3: Set up the survey software

Now that we had our list of 62 tasks, it was time to set up the actual mechanics of the survey.

Recruiting participants

We were targeting actual website visitors. Reaching this group was very straightforward: we did this through a popup which linked to the survey page.

We offered a $5 coffee card to participants as a “thank you”, to make it worth their time to take the survey.

If you are looking to give away a gift card as an incentive, then keep in mind that some companies have guidelines for such giveaways. They usually require you to use specific words to describe the offer (example Starbucks policy, example Amazon policy)

The popup tool I chose to use is called GetSiteControl (link), and it provides a variety of features for about $20 a month. The popups are good looking, and there is a high degree of flexibility around how the popups show up. Here is how the popup looked:

We set it up to show on English-language pages once a person had spent 5 seconds on the page, and to appear at most once per day.
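GetSiteControl handles these display rules for you, but if you want to roll your own invitation, the same logic can be sketched in plain JavaScript. This is only an illustration, not GetSiteControl’s implementation; the storage key and function names are my own inventions:

```javascript
// Pure check: has at least one day passed since the popup was last shown?
function shouldShowPopup(lastShownMs, nowMs) {
  var ONE_DAY_MS = 24 * 60 * 60 * 1000;
  return nowMs - lastShownMs >= ONE_DAY_MS;
}

// Browser wiring: show the invitation after 5 seconds on the page,
// at most once per day (tracked via localStorage).
function maybeShowSurveyPopup(showPopup) {
  var KEY = "surveyPopupLastShown"; // hypothetical storage key
  var last = Number(localStorage.getItem(KEY) || 0);
  if (!shouldShowPopup(last, Date.now())) return; // already shown today
  setTimeout(function () {
    localStorage.setItem(KEY, String(Date.now()));
    showPopup(); // your code to display the invitation
  }, 5000); // 5 seconds on the page
}
```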

Under these conditions, 7.29% of people who saw the popup ended up clicking through to the survey. 86% of those people then went on to answer at least 1 survey pre-qualifier question.

You can figure out how long to run your survey by using our conversion rates as a rough guide. Let’s say that you are looking to get 100 responses and your website has 10,000 visitors a month. To get 100 responses, at an 86% conversion rate, you’ll have to get 116 clicks on your popup banner. At a 7.29% conversion rate, you’ll have to show your banner to 1,591 people. With 10,000 visitors a month, you’ll get to show that many popups in 0.1591 months – about 5 days.
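The arithmetic above can be sketched as a small JavaScript helper. The rates in the example call (86% survey-start rate, 7.29% popup click-through) are the ones we observed; substitute your own:

```javascript
// Estimate how many popup impressions you need for a target number of
// survey responses, given the popup click-through rate and the rate at
// which clickers go on to start the survey.
function popupsNeeded(targetResponses, startRate, clickRate) {
  var clicksNeeded = targetResponses / startRate; // e.g. 100 / 0.86 ≈ 116 clicks
  return clicksNeeded / clickRate;                // then ≈ 116 / 0.0729 impressions
}

var impressions = popupsNeeded(100, 0.86, 0.0729); // ≈ 1,595 (≈ 1,591 if you round clicks to 116 first)
var daysToRun = (impressions / 10000) * 30;        // ≈ 4.8 days at 10,000 visitors/month
```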

Collecting data

You’ll need to set up a survey that your end-users can take. One free choice is Google Forms. Other, affordable choices include SurveyMonkey and Typeform.

The most important technical consideration is that your survey software must be able to randomize the “top tasks” list for each participant. This prevents people from shortcutting the survey by choosing whatever tasks sit at the start of the list – and those first tasks accumulating a high number of votes simply because of the ordering. Also, check how easy it is to set up 100 different tasks as answers to your main question (choose a tool that lets you upload a big list of answers all at once).
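If you ever need to implement the randomization yourself (say, when feeding tasks into a survey page through its API), the standard Fisher-Yates shuffle does the job. A sketch, with a hypothetical function name:

```javascript
// Return the task list in a uniformly random order, leaving the
// master list untouched (classic Fisher-Yates shuffle on a copy).
function shuffleTasks(tasks) {
  var shuffled = tasks.slice(); // copy so the master list stays ordered
  for (var i = shuffled.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1)); // random index 0..i
    var tmp = shuffled[i];
    shuffled[i] = shuffled[j];
    shuffled[j] = tmp;
  }
  return shuffled;
}
```

Call shuffleTasks(taskList) once per participant so each person sees the options in a different order.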

The LexisNexis survey was run through Qualtrics, an excellent survey tool for large companies.

Prequalification question

We aimed to get a certain number of responses from each of our key user groups. Getting more responses than necessary wouldn’t have added to our insights and would have cost us extra coffee vouchers. To cap responses, the survey started with 2 prequalification questions, accepting only people who fell into groups for which we wanted additional responses.

So, how many responses do you need to get for your survey?

We aimed to capture 50 responses in key customer categories, and 25 in others. In the European Commission poll (107,000 respondents), the top 3 tasks emerged after the first 30 votes were cast (page 20). The University of Glasgow had the top 3 student tasks emerge after 470 votes (page 3). Consult a statistician for the number that is right for you.

The “demographics” question

Your survey should ask a question that segments people into the major groups your organization cares about. Different groups will have different types of tasks they want to perform, and you should be able to report on these differences.

As an example, if you are running a Top Tasks survey for a hospital, it would be great to ask participants for their age. At LexisNexis, we asked participants to indicate what kind of firm they work at (or whether they are a student).

The “top tasks” question

The main question asked people to choose their top 5 tasks from an overwhelmingly long list, under a time limit. The purpose of this setup is to get a “gut reaction” feeling for a person’s desired tasks. Here is the phrasing we used:

Please select the top 5 key pieces of information you would like to find, or actions you would like to take, on the LexisNexis Canada company website.

You have maximum 5 minutes to make your selection. Trust your first instincts.

The Customer Carewords team phrases their question differently:

Please look at the following list and choose the FIVE most important things that help people find what they need using search.

Give a score of 5 to the MOST IMPORTANT to you, 4 to the next most important, then 3, 2, and 1.

Please give ONLY one score of 5, one 4, one 3, one 2, and one 1. Leave the rest blank.

Please trust your first instincts and spend no more than 5 minutes on this exercise.

The top tasks were set up with a 5-minute countdown.

A live countdown timer isn’t strictly necessary – Gerry’s surveys simply ask participants to limit their own time on the question to 5 minutes (https://www.surveymonkey.com/r/FLM2CXS). You can also see this approach in a Cisco Systems survey:

Skip the countdown if the work of implementing it will slow down your project.

Ranking the tasks

Next, we asked participants to rank their selections, to show the relative importance of each task:

Please rank your top 5 choices according to their importance to you.
Please drag them into the box and rank them accordingly. Number 1 is most important, number 5 is least important.

Open-ended question

A good last-minute idea came from Paul, the VP of Marketing & Strategy: adding open-ended questions to the survey.

We asked people why they’re on our website right now (as opposed to the kind of theoretical tasks they’d like to perform on the site). This question gave us further relevant information to analyze.

Asking people about their theoretical most-important tasks and asking them about why they’re on the site right this moment are two different things. Unfortunately, the current task a person is thinking of influences their “theoretical” top tasks list. You should think about how to separate the two. In my survey, I got people to pick their top 5 “theoretical tasks” before asking them about their immediate task – I didn’t want to bias their top 5 task selection by reminding them of their immediate task. I’m sure there is room for improvement here.

Many firms use the top tasks survey to recruit participants for a follow-up round of user interface testing. The European Commission got 40% of survey takers to sign up for future testing.

At LexisNexis we added a “would you like to participate in a follow-up survey” question and had 68% of participants say “yes”. This gave us a pre-built list of participants for a follow-up round.

Why would you ask survey participants to take another survey?
As you build a new information architecture for your site, you will probably want to test your architecture with real users.

You can test your new navigation structure by creating a mock-up of it and doing “first click testing”. This type of test reduces the risk of launching a navigation structure/IA and then having it fail under real-world conditions. Tools like Verify and UsabilityHub make it easy to check if people understand your navigation structure.

Remember that we promised participants a $5 coffee card as a “thank you”?

In the survey, we had to give people an option to decline the gift card. In Canada some government agencies forbid employees from getting any kind of gift from an organization that’s pursuing a government contract. Your jurisdiction might have similar rules, so be aware that you might need to add a “decline the gift” option to your survey.

Step 4: Running the survey

Now your survey is ready for launch. As it runs, continue monitoring and adjusting it. Your work doesn’t end when you hit the “launch” button.

At LexisNexis, we ran a 2 day pilot survey from May 9 to May 10. The aim was to reduce the possibility of visitors abusing the survey (because gift cards were on the line) and to detect any errors in the survey’s setup. We were going to do this by collecting about 10 responses and learning from the experience.

What we learned from the pilot:

Within those 2 days, we got 7 survey responses from the same organization (same IP address). All were articling students working in the same firm. I suspect they took the survey together because one person told their friends about the easy $5 card that they could get from filling the survey. This was a problem for us because we wanted to get responses from a wide range of people at different firms.

My goals were to make sharing the survey unreliable, and to leave myself a working-hours window in which to block IP addresses that submitted multiple responses.

To prevent a “tell all your friends” scenario, I adjusted the survey invitation popup to:

  • Run from 8am to 9pm on workdays
  • Show randomly to only 75% of site visitors
  • Appear only 1 time in a day for any given person
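The rules above can be sketched as a small gating function. This is a hypothetical illustration in Python (the real logic lived in our popup tool, and the function name is made up):

```python
import random
from datetime import datetime

def should_show_invite(now: datetime, already_shown_today: bool) -> bool:
    """Decide whether to show the survey invitation popup, per the rules above."""
    is_workday = now.weekday() < 5    # Monday through Friday
    in_hours = 8 <= now.hour < 21     # between 8am and 9pm
    lucky = random.random() < 0.75    # show randomly to ~75% of visitors
    return is_workday and in_hours and lucky and not already_shown_today
```

The “once per day” rule assumes you track whether a given visitor has already seen the popup today (for example, with a cookie).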

Insights from running the survey

Open-ended questions

Adding an open-ended question to the survey was very useful. Originally, we did not intend to add one, but at the last moment we included “What is the main reason you are on the LexisNexis website today? What are you looking to accomplish?”. This question helped us spot problems with the core Top Tasks question. When there was a major mismatch between a person’s Top Tasks and the goal of their current visit, it was a sign of a potential problem with our survey design.

Ambiguous task phrasing

As mentioned before, one issue we discovered was the ambiguous meaning of some top task terms. The same task could be interpreted as taking place inside our software tools, or as taking place on our corporate marketing site – we didn’t know how the respondent interpreted the task phrase. The solution is to be mindful of this potential problem and to have crystal-clear phrasing on top task options.

How appropriate is your incentive?

Another factor that made it difficult to get a broad range of participants is the $5 “thank you” coffee card. This was a survey of lawyers, legal professionals and law students. For a student, getting a $5 card would be a great reason to take our survey. For a lawyer earning $500/hour, that’s not such an appealing perk. Also – some people don’t like coffee, or that specific coffee chain – so they are less likely to take our survey. I suspect that this giveaway resulted in more student respondents and fewer established-professional respondents.

To be clear: I don’t think that the responses were biased, simply that we filled our quota of student respondents quickly and would’ve had to wait longer to get the same number of responses from seasoned lawyers.

If you have difficulties with getting the right number/type of respondents, I recommend reading the University of Glasgow’s writeup of their recruiting experience (page 3). Here is a graph showing how different activities impacted the number of respondents. Ultimately, the University got 1,074 responses in 5 months.

Some ideas for how you can recruit survey participants:

  • Popups on your site (with or without an incentive)
  • Sweepstakes (beware special legal conditions)
  • Posting to your social media followers
  • Pay for online ads (great for recruiting people who haven’t heard of your organization)
  • Announce the survey in a direct email / include in an e-newsletter
  • Coverage in your physical newsletter
  • Printed insert in physical packages that you mail out
  • Announcements on your intranet
  • Physical kiosks on your properties, where people can fill out surveys on a tablet
  • Physical survey at an industry conference

Ultimately, the LexisNexis survey got 261 full responses. It ran from May 9 to June 30, 2017.

Step 5: Get insights from your data. And use them.

Now that you’ve collected your Top Tasks survey responses, it is time to get some insights from the data.

Tasks that are important to many people

The first place where every Top Tasks analysis starts is adding up the votes that your survey participants gave to each task. This gives each task a score that lets you rank them by order of importance.

At LexisNexis we did this scoring in a simple way: each time that a person picked a task as part of their “top 5”, we gave it a score of 1. This means that the 1st most important task to a person, and the 5th most important to that person were scored the same. There are different ways to do the scoring – Gerry McGovern recommends giving 5 “points” to a person’s most important task, and only 1 “point” to their least important one. You can learn more about the consequences of the scoring system you choose at https://measuringu.com/top-tasks/  in the section “Different ways of calculating top tasks”.
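As a rough illustration of the difference between the two scoring schemes (the task names and responses below are made up):

```python
from collections import Counter

# Each response is an ordered list: most important task first.
responses = [
    ["pricing", "features", "training", "support", "log in"],
    ["features", "pricing", "support", "news", "careers"],
]

simple = Counter()    # 1 point for every appearance in a top-5 list
weighted = Counter()  # 5 points for rank 1, down to 1 point for rank 5

for top5 in responses:
    for rank, task in enumerate(top5):
        simple[task] += 1
        weighted[task] += 5 - rank

print(simple["pricing"])    # 2: chosen by both respondents
print(weighted["pricing"])  # 9: ranked 1st once (5 pts) and 2nd once (4 pts)
```

Under simple scoring, “pricing” and “features” tie; under weighted scoring, small differences in rank start to separate them.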

You will likely find that the few most popular tasks are important to a very large portion of your users. This is what Gerry calls the “Long Neck”. Discovering those few “tasks that you have to do right” is the main appeal of the Top Tasks approach. It helps you focus on the ones that play a much bigger role than the others.

This is a graph of how the votes for tasks on our site were distributed:

Distribution of scores at LexisNexis: similar to what Gerry was outlining, but not as extremely logarithmic. There’s a fat middle.

Compare this distribution with what the European Commission found:


Out of 62 possible tasks, the top 6 accounted for 24% of all votes. Overall, the 15 most popular tasks got 50% of all the votes for Top Tasks.

The most important tasks had to do with exploring features, understanding pricing and getting training/support. This kicked off an internal discussion around transparent pricing. The data was also a personal wake up call. Before the survey, I felt that my energy should be spent on parts of the site that drew in new customers. The survey showed that serving existing customers through training and support was more important to focus on.

Least important tasks

Aside from highlighting the most important tasks, the survey shows you the least important ones (sometimes called “tiny tasks”). These are often tasks that are important to the organization, but not to the users: things like the company’s history, press releases, leadership profiles etc.

Note that just because a task is unpopular, doesn’t mean that it is unimportant or that it should be hidden. For example, at LexisNexis, the tiny tasks were important to professors who are including LexisNexis software in their courses – a group that’s very important to accommodate.

You should aim to create a navigation structure that makes it very easy to perform popular tasks, while still making it possible to perform tiny tasks.

Tasks that appear together

I recommend exploring which tasks are frequently chosen together within a given person’s “top 5”.

You can do this using a “network map”, like this one:

Each task is a “point”, and the lines between tasks represent the number of times that the 2 tasks were chosen together. (For example: a pair of tasks that was chosen together 10 times will have a 10-pixel thick line between them. A pair that was only chosen together once will have just a 1-pixel line between them).

You can do this analysis with free network-visualization software called Gephi. It takes quite a bit of Excel wizardry to import your survey results into Gephi. Here is a quick guide to the terminology you’ll encounter:

Node: the task’s name

Edge: the connection between two tasks. An Edge connects 2 Nodes – and is the entity that represents the fact that 2 tasks were chosen in the “top 5” together. The type of Edge you will need to set up is called “Undirected”.

Weight: the number of people who chose a given pair of tasks together in their “top 5”.

So, if 20 people chose “features” and “pricing” together in their top 5 tasks lists, then there are 2 Nodes (“features” and “pricing”), interconnected by an Edge with a Weight of 20.
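Some of the Excel wizardry can be replaced with a short script. Here is a minimal sketch (with made-up task names) of counting co-occurring pairs and exporting an undirected edge list in a CSV layout that Gephi can import:

```python
from collections import Counter
from itertools import combinations

# Each person's "top 5" selections; order doesn't matter for co-occurrence.
responses = [
    {"features", "pricing", "training", "support", "log in"},
    {"features", "pricing", "news", "support", "careers"},
]

edges = Counter()
for top5 in responses:
    # Count every unordered pair of tasks within one person's top 5
    for a, b in combinations(sorted(top5), 2):
        edges[(a, b)] += 1

# Write an edge list for Gephi's spreadsheet import
with open("edges.csv", "w") as f:
    f.write("Source,Target,Type,Weight\n")
    for (a, b), w in edges.items():
        f.write(f"{a},{b},Undirected,{w}\n")

print(edges[("features", "pricing")])  # 2: this pair appears in both lists
```

Sorting each person’s list before pairing ensures that (“features”, “pricing”) and (“pricing”, “features”) count as the same undirected edge.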

This analysis showed us that there were 3 major groups of tasks that commonly occur together. Some tasks were part of several groups. Those tasks – ones that interconnect major groups – can serve as a jumping off point for introducing people to a variety of related content.

For example, your “product features” page could serve as a branching-off point for content that serves your existing customers (support for a specific feature, training on it) and for other content that’s aimed at potential buyers (benefits + screenshots).

Relative task importance

Up to this point, we looked at analyzing the data from the perspective that every “Top 5” task selected by a visitor is equally important. However, we did ask people to rank the tasks in order of relative importance. This lets you learn about the relative importance of tasks to the people who chose them.

Here are a few ways in which to think about relative importance of tasks:

Important to a small group

These are tasks that were ranked “most important” by a small group. They usually belong to niche groups that use your site. At a software company, these visitors could be the IT People maintaining the software for the main customer. At a university, these could be the students’ parents, who interact with the site in a very different way than students.

Low-importance, to a large group

You might find that people included a set of tasks in their list but chose them as the “least important of the top 5”. At a business that makes clothing, this might be a task like “how environmentally friendly is the production process?”. You should address this kind of task on your site, but it is more of a “box to check” on a visitor’s checklist than a major item in their journey.

One way to address these tasks is to mention them on key pages, but to hold back from creating a lot of dedicated content around them.

Important, to a large group

This is not necessarily the same as your overall “top tasks”. In our case, they tended to be administrative tasks like “log in to the software”. Make sure that these tasks are easy to perform on the site.

Step 6: use the report

Once you finish drawing conclusions from your data, it is time to drive some action.

Your Top Tasks analysis is a tool that helps your organization accomplish a goal. You can use it to champion the interests of end-users during a large project (such as changing an interface or improving user experience).

If you’ve ever tried to advocate for customer interests in the past, you might have seen how difficult it is to get people to focus on the customer. Your personal expertise will only get the team to a certain point – beyond that, they need to be convinced by someone more authoritative. Your Top Tasks analysis could be exactly the kind of authoritative data that would show what customers are thinking.


When your analysis is complete, decide how you will use the data. Simply preparing a report is not enough – to drive action, your data needs to be actively “sold” to people in your organization.

Answer these “5 W” questions:

  • Why were these Top Tasks insights needed?
  • When and where would you like your analysis to be seen? (This determines the format of your final outputs.)
  • What is the minimum information someone needs to see so they understand your point?
  • Who needs to see this data?

Tip: Always note the number of people who responded to your survey. It lends credibility to the data and shows why people should take your analysis seriously.

Here is how I used the Top Tasks data in our information architecture project at LexisNexis:

Why was the analysis commissioned?

This analysis’ intention was to represent the “voice of the user” during website planning discussions.

When and Where will it be used?

The analysis was not the focus of our short discussion sessions. This meant that there would be limited time to present the findings.

As a result, I prepared a short presentation with quick insights:

  1. The first 6 top tasks (representing 25% of votes)
  2. The large number of survey participants (this established how solid the data is)
  3. What interests are unique to each group of visitors.

I also gave out a printed copy of the full report to each participant – in case they needed deeper insights during the meeting (and for easy reference during disagreements).

Who needs to see your data?

Besides project participants, I also gave printed copies to key stakeholders outside the project. The reason for making so many printed copies was to ensure this information wouldn’t be lost on a shared drive. It would survive longer as a physical object.

I shared a digital version with interested parties at our similar “sister companies” outside of Canada. This was a way of getting more benefits from our investment in the survey.

What is the minimum analysis you need to make your point?

Although the full report contained several separate analyses, I presented the key insights in an executive summary that was broken down by “groups of interest”. This let busy Vice Presidents quickly see the conclusions relevant to them, if they just wanted an overview.

The absolute smallest insight from the survey was a table of the top 6 overall tasks. This was short enough to put in the body text of an email or meeting invitation. This was a way of showing the Top Tasks to people I communicated with – even if they weren’t going to open the attached report.


At LexisNexis, the Top Tasks survey led to 3 main benefits:

  • It woke me up to the importance of serving existing customers by emphasizing training and support content. As a result, these 2 types of content were made easier to find and use.
  • It started a discussion around a key part of the sales process that was not addressed on the website at all. This activity was task #2 in the survey results.
  • We learned that we could do better on the task that was #3 in importance. The network analysis of inter-related tasks showed that this content was important to several distinct groups of visitors.

How long will it take you to run a Top Tasks Survey?

I can’t predict how long it will take your specific organization to complete this type of survey. But, I can give you a general idea.

At our organization, it took almost 6 months to run the survey (261 responses, in a business-to-business setting). Planning started on February 9, 2017 and the final analysis report was finished on July 27, 2017.

Here is a timeline of the specific activities that took place and how long they generally took.

The larger-scale University of Glasgow survey, with 1,074 respondents, took about 5 months just to collect survey responses (12 June to 6 November 2017). The analysis portion appears to have taken at least 2 months.

One Customer Carewords presentation indicates that it takes their team from 9 weeks to over 3 months to perform an entire Top Tasks test. This is a much shorter time than what it would take an in-house team. I believe that they’re able to perform the setup and analysis quickly because of their previous experience with this type of survey. Also, their team can focus 100% on running your survey (where an in-house team has other duties that demand their time).

Customer Carewords timeline for running a Top Tasks survey

Helping your organization see beyond its own interests

Do you feel pressured to highlight someone’s “pet content” on a website, or to add “pet features” to a product?

If so, then a Top Tasks survey can be a tool that brings the focus back to end-user priorities. Consider running a separate Top Tasks survey just for employees of your organization, asking them to choose the tasks they think that customers will choose as their “top 5”.

When you compare the internal results with the public-facing ones, you will see which tasks your team thinks are valuable but are actually unimportant to customers. For example, such a comparison on the OECD website showed that the task “Overview of what the OECD does” was 4 times less important to customers than their employees thought.

A note of caution when highlighting top tasks on your site/product

Once you know the most important tasks for your website, you will be tempted to just list all of them in a “popular tasks” area on your site.

Don’t do it!

Gerry McGovern has a great writeup on how a UK organization called Citizen’s Advice put all their top tasks into a “quick links” box on their homepage. As a result, every visitor expected their immediate task to be present as a “quick link” and it became a general catch-all category for any task whatsoever.

When you restructure your site and navigation to account for Top Tasks, remember that the navigation needs to flow logically for all tasks. Those less popular tasks (called “tiny tasks”) are still a top priority for someone. They need to be discoverable with a reasonable amount of effort.

For example, the University of Glasgow team acted on their Top Tasks findings by changing the kind of content they highlight on the homepage. These adjusted areas make it easy for visitors to see important tasks. (https://medium.com/uofg-ux/top-tasks-management-part-1-64bf071fc83f)

Beyond Top Tasks

What do you do after you restructure your site to fit users’ Top Tasks?

There are several other techniques that you can read about. Here are my recommendations.

Go and do it!

Now that you know how LexisNexis and other organizations have run Top Tasks surveys, you are ready to try it yourself.

If your organization considers “speed of access to information + task performance” a core activity (due to, for example, great pressure to make services “self serve”), then I encourage you to undertake a Top Tasks analysis on your own. Building out your team’s ability to run these types of surveys will be an investment of time that will bring benefits over and over in the future. Typical organizations that should consider doing this type of project with their in-house staff:

  • Municipal, provincial and federal government agencies
  • Educational institutions
  • Tourism organizations
  • Software companies with a large base of “legacy” users
  • B2C companies that operate at scale (Walmart, social networks)

If speedy access to information + tasks is a “nice to have” in your organization, then I recommend that you engage external consultants on a one-off basis to perform a Top Tasks analysis. There will be limited benefit to trying to build out an internal team that’s capable of running these kinds of surveys.

Examples of such organizations:

  • Monopolies and duopolies, where ease-of-use is not a “killer feature”
  • B2B companies that make most of their money from just a few large customer companies

Thank you for setting aside time to read this document. If you have questions or comments, email me at  j @ <this site>.

Default field mappings (Pardot and Salesforce integration)

Here are some notes about the initial sync that occurs between Pardot and Salesforce (SFDC).

When you first enable and validate the Salesforce Connector in Pardot, a sync will begin automatically. You will not have a chance to define how fields sync between Pardot and Salesforce, or how data overrides work – you are at the mercy of the default mapping that’s been preset for you.

Here is the official document that explains how Pardot fields are mapped to Salesforce fields by default.

Don’t like how the default sync is set up? Want total control of the initial mapping between Pardot and Salesforce? I recommend that you do 3 things:

  1. Configure the connector so that Salesforce is prevented from creating new Pardot Prospect records (at least initially).
  2. Ensure that the Salesforce user who’s been set up as the Pardot Connector User is blind to all Leads and Contacts. Your Salesforce administrator will have to set this up prior to verifying and enabling the Pardot Connector.
  3. If you have any assigned Prospects in your Pardot instance, mark them as “Do Not Sync With CRM” using an automation rule. Anyone who’s assigned will be automatically created as a Lead in Salesforce when you verify the Connector (this will happen because step #2 ensures that Pardot will never find an existing Lead for an Assigned prospect – they’re all hidden. So it will attempt to create a new Lead).

These settings ensure that no Prospects are synced initially. This gives you time to configure the kind of integration mapping that you desire.

Remember that you can do a “practice integration” by connecting Pardot to a Salesforce Sandbox, and even by getting a free Pardot Training Environment to protect your main Pardot during integration testing.


In addition, here is a table that shows field IDs for the default mapping for our organization. Some of these field types might depend on our specific Salesforce setup:


| Pardot field name | Pardot field ID | salesforce.com field name | Type | If Pardot and Salesforce values differ at sync |
|---|---|---|---|---|
| Years In Business | years_in_business | – | Text | – |
| Website | website | Website | Text | Use Salesforce’s value |
| Territory | territory | – | Text | – |
| Source | source | LeadSource | Text | Use Salesforce’s value |
| Scoring Category Last Scored At | last_scored_at | pi__Pardot_Last_Scored_At__c | Date | Use Pardot’s value |
| Salutation | salutation | Salutation | Dropdown | Use Salesforce’s value |
| Province | state | MailingState | Text | Use Salesforce’s value |
| Postal Code | zip | MailingPostalCode | Text | Use Salesforce’s value |
| Phone | phone | Phone | Text | Use Salesforce’s value |
| Pardot Hard Bounced | pardot_hard_bounced | pi__pardot_hard_bounced__c | Checkbox | Pardot is the master (see note below) |
| Opted Out | opted_out | HasOptedOutOfEmail | Checkbox | More about Opt-Out sync behaviour |
| Last Name | last_name | LastName | Text | Use Salesforce’s value |
| Job Title | job_title | Title | Text | Use Salesforce’s value |
| Industry | industry | Industry | Text | Use Salesforce’s value |
| First Name | first_name | FirstName | Text | Use Salesforce’s value |
| Fax | fax | Fax | Text | Use Salesforce’s value |
| Employees | employees | NumberOfEmployees | Text | Use Salesforce’s value |
| Email Bounced Reason | email_bounced_reason | EmailBouncedReason | Text | More details in the official field mapping docs |
| Email Bounced Date | email_bounced_date | EmailBouncedDate | Text | More details in the official field mapping docs |
| Email | email | Email | Text | Use Salesforce’s value |
| Do Not Email | is_do_not_email | – | Checkbox | – |
| Do Not Call | is_do_not_call | – | Checkbox | – |
| Department | department | Department | Text | Use Salesforce’s value |
| Country | country | MailingCountry | Text | Use Salesforce’s value |
| Company | company | Company | Text | Use Salesforce’s value |
| Comments | comments | – | Textarea | – |
| City | city | MailingCity | Text | Use Salesforce’s value |
| Annual Revenue | annual_revenue | AnnualRevenue | Text | Use Salesforce’s value |
| Address Two | address_two | MailingStreet | Text | Use Salesforce’s value |
| Address One | address_one | Street | Text | Use Salesforce’s value |

Note on Pardot Hard Bounced: Pardot is the master. Map this lead field to the contact field so the contact record pulls in the data. The field is hidden until a hard bounce occurs. Even if the Do Not Email and Opted Out fields are cleared, the bounce history is retained on the prospect record and the Pardot Hard Bounced, Email Bounced Reason and Email Bounced Date fields are not cleared. If the hard bounce was the result of an invalid email address, adding a valid email address clears the Pardot Hard Bounced field.

Read more about how email address changes are synced.

The Pardot documentation has more information about standard sync behaviours.

And, this document shows how to change the sync behaviour for a particular pair of fields. Keep in mind that data priorities between Pardot and Salesforce cannot be changed for certain fields (for example, the “Opted Out” field – although that field can be remapped). These special rules are explained in the official field mapping documentation.

Finally, you should be aware that – regardless of the sync behaviour you’ve set up – Pardot will never use a blank value to overwrite an existing field value. The nuances of how Pardot acts on blank values are explained in this post from The Spot for Pardot.


Search keywords n-gram analysis tool

This is an n-gram analysis tool for search keywords from Google Webmaster Console. Simply copy the query and frequency (of clicks, or impressions) into the “Input field”. What makes this tool unique is that it takes into account the weighted frequencies of a term’s appearance in your list.

For example, if you are calculating an n-gram of 2, and your query click counts are:

  • plumbers in Toronto (22 clicks)
  • plumbers in New York (11 clicks)

Then “plumbers in” will have a total weight of 33 clicks.
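Here is a minimal sketch of that weighted counting logic (the queries and helper name are illustrative, not the tool’s actual code):

```python
from collections import Counter

# Hypothetical query report: (search query, click count)
queries = [
    ("plumbers in toronto", 22),
    ("plumbers in new york", 11),
]

def weighted_ngrams(rows, n):
    counts = Counter()
    for query, weight in rows:
        words = query.split()
        # Slide an n-word window across the query, adding its click weight
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += weight
    return counts

bigrams = weighted_ngrams(queries, 2)
print(bigrams["plumbers in"])  # 33: 22 clicks + 11 clicks
```

Weighting by clicks (rather than counting each query once) is what surfaces phrases that actually drive traffic, not just phrases that appear often.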

[video] Online marketing tips for small businesses

On August 31st, 2017 I was invited to present to the StartUp Here Toronto community. I spoke about online marketing tips for new businesses. A recording of the presentation is available below:

Electrical bill for 100 Wellesley St. East – Toronto, Canada

How much can a renter expect to pay for electricity in Downtown Toronto?

Back in January 2016, my pregnant wife and I moved into a new 2-bedroom apartment at 100 Wellesley St. East in Toronto. Soon after, we got our first full electrical bill: it was a $327 surprise.

This was very different from the $36/month bill at the 1-bedroom downtown apartment we just left.

This post shares our real electrical bills with you, to show what an ordinary downtown renter can expect to pay.

We wish that we had known about the size of the electrical bill ahead of renting our apartment. As renters, we have zero choice in electricity providers – the company that set up a building’s original submeter wiring gets to set whatever renter fees it wants, and it is a “take it or move elsewhere” type of arrangement.

A Sample Bill

Here is an example of the type of charges that appear on a typical electrical bill (numbers are from Jan 1, 2017 to Feb 1, 2017).
Hover over the “Charge” item with your mouse to see the explanation Wyse’s website gives for the charge.

All charges below are for the period 01/01/17–02/01/17.

| Type of charge | Amount ($) |
|---|---|
| Delivery | 69.78 |
| Electric Customer Charge | 0.06 |
| Energy Charge | 51.59 |
| Energy Charge 2 | 173.97 |
| Regulatory Charges | 12.56 |
| Service Delivery Fee | 21.66 |
| HST # 832218960 RT0001 | 41.09 |
| SSS Admin Charge | 0.26 |
| Wyse HST | 3.96 |
| Electric Meter | 4.37 |
| Electric Meter 2 | 4.38 |
| Line Loss Adjustment | 7.81 |
| 8% Provincial Rebate | -25.27 |
| 8% Provincial Rebate – Wyse | -2.43 |
| Total | 363.79 |

When signing the rental agreement, we paid a one-time $20 account setup fee and an energy deposit of $75.

Total Monthly Electricity Bill

Here is a graph that shows the total monthly electricity bill that we received in each month. Note that electrical charges for winter and spring are much higher than for summer and fall.

Late fees and one-time setup charges were removed from the total.

It is great that Wyse provides an online portal that lets us download detailed billing and usage data. The raw bill data, as well as daily electricity consumption data, is available in this Excel file for your reference:

2017-07-24 Wyse utility readings.xlsx

Note: we moved into the apartment on Jan. 20, 2016. Before that, there was some work being done by contractors. Because of that, the first “normal” month of usage will be February – the bill for that usage will appear in the chart above under “March”.

Daily Electricity Usage

Below is a graph showing daily electricity consumption for our apartment. The data goes back only as far as part of July 2016.

Attempts at Reducing the Electrical Bill

In an attempt to reduce our whopping winter bills, we tried the following:

  • Turning off the heat whenever we left the apartment, starting around April 2016 (up to then, we had left the heat on even when we were away).
  • Acquiring 2 oil-based space heaters, and using them instead of the AC units connected to the thermostats.
  • On Sept. 11, 2016, we bought heat insulating curtains for the large windows in the living room. The idea was to prevent heat leakage through the glass.
  • Sometime around January 2017, I taped any cracks in all windows using insulating tape. I also taped the edges of the balcony door with insulating foam tape.

Unfortunately, despite all the extra expense and work we put into saving energy, our electrical bills were even higher in 2017 than they were in 2016.

The 2017 bills for May, June and July were higher than in the previous year. This might be because the 2016 and 2017 bills cover different date ranges (April 15-May 16 in 2016, April 1-April 20 in 2017). It is possible to get weather data for that timespan from the Government of Canada and compare downtown temperatures. A quick look at the May data shows that days in the 2017 billing span were indeed colder than those in the previous year’s span.

Basic Information About Our Unit

The company serving this apartment building is Wyse Meter Solutions.

Our unit is electrically heated, is submetered, and is hooked up to 2 meters. That means that we have 2 thermostats that we can use to heat/cool the unit as much as we wish. We are billed directly for our own electricity consumption.

We do not have any unusually electricity-hungry appliances. Just the standard fridge, microwave and oven. The fridge is old and may be less efficient than average.

The unit is a 2 bedroom corner unit. That means that 2 sides of the unit have walls that are exposed directly to the outside, and an additional wall of the Master Bedroom is shared with the emergency stairwell – which is colder than the corridor. This kind of unit would have higher heating costs than a unit in the middle of the building. That kind of unit would have only 1 wall facing the outdoors.

Proof – electrical bill pictures

For reference, here are pictures of the electrical bills whose data I analyzed in this post:


Now share your story…

How does your electrical bill compare to what you’ve seen above?

If you live in an apartment, use the Comment field below to share your electrical charges. Please indicate which month the charge is for, which area you live in, and the number of bedrooms.

If you have your own website and have posted about this topic, please share a link to your article!

Creating a Choropleth with Google Maps and GeoJSON

This tutorial will teach you how to create a choropleth map that can be posted on a website. A choropleth map – also known as a heatmap – allows you to divide a map into geographical areas, and give each area a color that corresponds to a measurement in that area. For example, you can visualize crime in a city by making a neighbourhood dark red if more than 40 criminal acts occurred there in the past year, yellow if 20-39 criminal acts took place, and green if 19 or fewer crimes were committed (see here for an example). Choropleths allow you to discover links between pieces of data and their geographic distribution.

Here is an example of what a choropleth looks like:


Here is the final visualization that we will produce at the end of this tutorial.

The Technology Involved

To build our Google Maps heatmap, we will be using:

Google Maps API for generating a map and visualizing colored areas. We will be adding a Data Layer on top of a regular map – here is a tutorial that gives more information about the Data Layer.

QGIS software for processing and transforming map coordinates, and for enriching a map with custom data. QGIS is an open-source tool for working with cartographic data, and you can download it at the QGIS site.

GeoJSON for feeding rich data into the Google Maps API. GeoJSON is a way of representing map data in a compact way, especially when publishing maps on websites. Wikipedia has a good overview of GeoJSON.

Intended Audience

This tutorial is intended for people who need to generate a map with colored areas for publication on a website. A degree of HTML/Javascript knowledge helps, but you do not need to know anything about mapping or GIS (Geographical Information Systems) in order to follow along. This document will be especially useful if you need to create a choropleth for areas outside of the USA. If you are in the USA, it will probably be faster to do your mapping with tools like Google Fusion Tables or Choropleth.us. Finally, if you have the option of paying for a mapping tool, then CartoDB or Tableau could allow you to generate the map faster.

Feel free to skip sections that deal with areas that don’t interest you – such as the sections on acquiring a data set, processing the data in Excel, or setting up the map with Javascript.

1. Download statistics data

The data set that we’ll be visualizing comes from Statistics Canada. This health data was collected as part of a survey called the Canadian Community Health Survey (CCHS), and some other sources.

Download the data by going to CANSIM (http://www5.statcan.gc.ca/cansim/a01?lang=eng) and entering table number 105-0501 in the search box. You’ll see a summary of the data we’re about to download. Next, go to the “Download” tab at the top. You’ll have 2 options:

  • Download only the summary data that you saw on the previous screen.
  • Download the entire data set, with all columns.

Choose the second option.


Processing the Data

When you open the CSV file you downloaded, you’ll see that health measures are broken out according to “Health Regions” ( http://www12.statcan.gc.ca/health-sante/82-228/help-aide/Q01.cfm?Lang=E ) – these are administrative areas that are smaller than a province but bigger than a city.

Column A contains a numeric code for each region.


Let’s single out the health attribute “Sense of community belonging (83)” for visualization in our choropleth. This measure is the percentage of the population aged 12 and over who reported their sense of belonging to their local community as being “very strong” or “somewhat strong”.

We are aiming to create a very simple CSV file that contains:

  • Health Region code
  • Health Region name
  • The percentage of people who replied “very strong” or “somewhat strong” to the “Sense of community belonging (83)” question.

To capture that last item, we need to make sure that we’re getting the data under column N – “Rate_Total” – for the question we are exploring. The file contains data that deals with specific age ranges, or greater regions – like the province of Nova Scotia – which is a summary of Health Region results. We will not be using these other fields.

Cleaning up the data

In order to build our simplified CSV, we are going to filter the data according to certain criteria. Select the range from A1 to N6262, click “Data” at the top of the Excel Ribbon, and click the funnel shaped “Filter” icon.

Next, click on the little funnel icon beside the column names and filter out the data as follows:

  • Column A: hide any codes below 1000 (these are redundant groupings of Health Regions. We want to work only with the HRs)
  • Column E: keep only the item named “Sense of community belonging (83)”


Finally, create a new spreadsheet, and paste the visible values in columns A, B and N into it. Change the column heading “Rate_Total” to “Feel Belonging”, so that we can present the numbers clearly as the percentage of people who feel community belonging in each Health Region.

Save the file as a CSV. Here is the processed CSV file that I ended up with: CCHS-community-belonging-metric.csv .
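If you’d rather script this filtering than click through Excel, the steps above can be sketched in Node.js. The column positions (A = region code, B = name, E = measure, N = Rate_Total) follow the description above, and the naive comma-splitting assumes no quoted commas in the data, so treat this as a sketch rather than a robust CSV parser:

```javascript
// Sketch: reproduce the Excel filtering steps on the raw CCHS export.
// Assumes no quoted commas inside fields -- a real CSV parser is safer.
function simplifyCchs(csvText) {
  const rows = csvText.trim().split("\n").map(line => line.split(","));
  const out = [["Code", "Name", "Feel Belonging"]];
  for (const row of rows.slice(1)) {          // skip the heading row
    const code = parseInt(row[0], 10);        // column A: region code
    const measure = row[4];                   // column E: health measure
    if (code >= 1000 && measure === "Sense of community belonging (83)") {
      out.push([row[0], row[1], row[13]]);    // columns A, B and N (Rate_Total)
    }
  }
  return out.map(r => r.join(",")).join("\n");
}

// A toy input with one summary row (code < 1000), one matching Health Region
// row, and one row for a different measure:
const sample = [
  "Code,Name,C,D,Measure,F,G,H,I,J,K,L,M,Rate_Total",
  "10,Canada,,,Sense of community belonging (83),,,,,,,,,65.3",
  "1011,Eastern Regional Health Authority,,,Sense of community belonging (83),,,,,,,,,74.2",
  "1011,Eastern Regional Health Authority,,,Some other measure,,,,,,,,,50.0"
].join("\n");

console.log(simplifyCchs(sample));
```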

Double check your work

Your new CSV file should have 182 health regions (plus one row for the column headings). Only 159 of the health regions will have any value under the “Feel Belonging” column. The file will look something like this:


Further exploration

If you’d like to understand exactly what each item in the spreadsheet means, and how it was gathered, you should read the following documents:

2. Get QGIS and Health Region boundaries

Now that we have a neat CSV that describes how people in each Health Region feel about their community, we need to start thinking about visualizing this data on the map.

At this point, we’ve gone as far as Excel alone can take us.

In order to visualize data on a map, you need to download a GIS software package. The industry standard software is called ArcGIS ( http://www.arcgis.com/ ), and a search of their online store ( http://store.esri.com/esri/ ) reveals a price range from a $100 noncommercial license to a $2,500 online subscription.

For the purposes of this tutorial, we will use the free and open source QGIS software. Download and install the latest version from http://qgis.org/en/site/ .

The examples and screenshots you will see here come from QGIS version 2.8.1 (Wien).

Now that you have QGIS on your computer, you need to let it know what the outline of each Health Region looks like on a map. QGIS has no “out of the box” way of knowing where HR boundaries start and end, so we need to download these definitions from Statcan.

Visit the following URL: http://www.statcan.gc.ca/pub/82-402-x/2013002/reg-eng.htm

Download the topmost “Canada” file under the “MapInfo” section. These are the outlines of the HRs from October 2013.


At this point, if your aim is to do a proper analysis of the CCHS data, then you will need to return to your CSV file and remove any HRs which are present in the Excel data but are not present inside the MapInfo file.

To see the full list of attribute IDs in the MapInfo file, open the file in QGIS as a vector layer (this is explained below). Then, right click on the layer and select “Save As”. In the settings, save it as a CSV file – only the text attributes related to each HR will be saved. You can then use the Excel vlookup() function to detect regions that are present in the Excel CCHS file but not in the MapInfo file.

3. Create QGIS File + CSV mapping

Let’s review what we have so far:

We have a CSV file with a metric (“sense of community belonging”) that ties to each numbered Health Region, we have a MapInfo file that describes the shape of each HR on a map, and we have QGIS – which we’ll use to combine the two on one map.

Start by opening QGIS. Your goal is to create a Layer, which will contain the data from the MapInfo file that you downloaded from Statscan.

Go to “Layer” -> “Add Layer” -> “Add Vector Layer”. Browse for the Health Region file you downloaded, and make sure to select “Mapinfo File” from the file extension selector at the bottom right.

The official instructions are here: (http://docs.qgis.org/testing/en/docs/user_manual/working_with_vector/supported_data.html#loading-a-mapinfo-layer)

Your screen should look something like this:


Combining the geographic information with the CSV data

Just like we have loaded in a “Vector Layer” with shapes, we will load in a “Delimited Text Layer” with attribute data. Before we can do that, we need to create a special file that will tell QGIS about the type of data that’s in each column of our CSV (is it text, integer, or floating point?).

Follow the instructions here to create this file, with a “CSVT” extension: http://anitagraser.com/2011/03/07/how-to-specify-data-types-of-csv-columns-for-use-in-qgis/

The CSVT file for our basic CSV file will contain the following content:


The HR code is an integer, the English name of the HR is a string, and the percentage of people who feel a sense of community belonging is a decimal (real) number with 2 digits to the right of the decimal point.

Make sure that the CSVT file name is the same as the CSV filename, aside from the extension. QGIS needs the names to match in order to pair the two files up.

Once the CSVT file exists, go back into QGIS and load up the CSV. You should do this by going to “Layer” -> “Add Layer” -> “Add Delimited Text Layer…” Choose the CSV file, and set up the window that comes up with a CSV file format as follows:

  1. Choose an encoding of CP1252. I needed to choose this encoding, on a Windows 7 machine, in order for French language names to appear properly. You can also leave it as UTF8 if you are working with English-only data.
  2. Indicate that the first row of the file contains field names, not data.
  3. Specify “No geometry” as the final option.

When you finish, a second layer should appear in the Layer view on the left sidebar:


Connect the Data and the Vectors

In order to tie in the “community belonging” metric to a specific shape, we need to join these two layers.

Follow these steps:

  1. Right click on the vector layer (it will be the one with a colored square next to it). Click “Properties”.
  2. Click on the “Joins” option towards the bottom on the left. This is where you attach supplemental information to the geometric figures that describe Health Regions.
  3. Click the green plus sign to add a new Join.
  4. Choose the CSV data layer as the “join layer”, and use the “Code” and “HR_UID” fields as the join fields. These are the two fields, in the CSV and the Vector Layer that have the Health Region ID.
  5. Press “OK” to confirm your settings.

Let’s confirm that the join worked:

Go back to the window where you chose the “Join” icon. Above it, you will find an icon called “Fields”.

Verify that you can see a field called “CCHS-community-belonging-metric_Feel Belonging”. That would indicate that our vector layer, which came from a MapInfo file, now contains data from our CSV file.


More explanations of how to join CSV and Vector data:

Ujaval Gandhi http://www.qgistutorials.com/en/docs/performing_table_joins.html

Sake Wagenaar http://www.qgis.nl/2012/07/13/koppelen-van-data-uit-csv-bestand/?lang=en

4. Simplify the map – the tools that didn’t work

Save your work as a QGIS “QGS” file.

As you’ve been working on the map, you might have noticed that it is highly detailed. This is fine on our local computer, but is too much detail for passing to the Google Maps API. The smaller you can make your map file, the faster it will load on your site.

Below are pictures of a coastline that starts out with maximum complexity, and gets gradually more simplified. That last stage of simplification is what we want for our map’s Health Region borders.


Notes on built-in QGIS simplification

There are two built-in options for simplifying geography in QGIS, and a third option which is a third-party plugin. I found all 3 options unworkable for simplifying the Health Regions, but I’d like to list them here for completeness – just in case they will work for your projects. Click here to skip and read about the tool that did work.

The “Simplify Geometries” tool

You can access this tool by going to “Vector” -> “Geometry Tools” -> “Simplify Geometries…”


My main challenge with this tool was that the simplification algorithm did not give me the ability to fine tune the degree of simplification on the fly. The edges of the Health Regions were either not simplified enough or were over-simplified, resulting in strange jagged edges like the following:



The GRASS “v.generalize” module

GRASS is an open-source toolset that comes bundled with QGIS. You can read more about it here: http://grass.osgeo.org/

The simplification module of GRASS is called “v.generalize”. You can access it in QGIS by going to “Processing” -> “Toolbox”. A bar will appear on the right of the screen, simply enter “v.generalize” into the search form.


My difficulty with this tool, aside from finding the “right” simplification settings, is that it works very slowly and sometimes throws up errors. In certain instances, I couldn’t see any difference between the original and the simplified polygons. I was using GRASS version 6 though, and you might see better results if you download the latest version 7 package.

The SimpliPy plugin

This is a 3rd party plugin built with Python. SimpliPy’s homepage is here: https://plugins.qgis.org/plugins/simplipy/

You would install it in QGIS by going to “Plugins” -> “Manage and Install Plugins”, and searching for SimpliPy.

My issue with this plugin is that, at times, it throws up errors that are unclear and uninformative. When an error comes up, the only way to re-run the plugin is to totally restart QGIS. Also, I was not able to see the layer which contains the simplified polygons that this tool generates.

5. Simplifying with Mapshaper

You are welcome to try out the 3 simplification methods listed above. They did not meet my needs, so I had to resort to a 4th method: a tool called mapshaper.org (http://www.mapshaper.org/).

Mapshaper is a web-based tool, which is also available for you to download and run locally on Node.js.

To simplify our map, we start by exporting our .QGS file (with the joined CSV attributes) as a .SHP file.

Before we can save our map as .SHP, we need to put the data into a specific “Projection” – we need to create a new Coordinate Reference System (CRS). This projection is the formula we use to take a 3-dimensional surface (the earth) and flatten it out to a 2-dimensional one (the screen).

Statistics Canada supplies us with the proper projection data that they used for the map, in the Projection Information section at http://www.statcan.gc.ca/pub/82-402-x/2015001/gui-eng.htm#a5

Unfortunately, I do not know how to set up a custom projection according to these parameters, so I ended up going to spatialreference.org, a website that records common projections and the software settings that can reproduce them.

The projection “NAD83 / Statistics Canada Lambert” seems like the standard one used by Statscan: http://spatialreference.org/ref/epsg/3347/

Now, in order to create this new projection inside QGIS, we follow some of the steps at http://gis.stackexchange.com/questions/20566/how-to-define-new-custom-projections-in-qgis/20568 to do the following:

In QGIS, go to “Settings” -> “Custom CRS”. Then, on the Spatialreference.org page, click on the “Proj4” link and copy the string you get.

Back in QGIS, name your new projection and paste in the Proj4 string:


For reference, the string is:

+proj=lcc +lat_1=49 +lat_2=77 +lat_0=63.390675 +lon_0=-91.86666666666666 +x_0=6200000 +y_0=3000000 +ellps=GRS80 +datum=NAD83 +units=m +no_defs

Hit “Ok” and you have your new CRS projection. If you look at the Proj4 string, it seems to exactly correspond to the parameters that are set out on the Statscan site.

Next, we will save the map as an ESRI .SHP file with our new projection.

Right click on the vector layer in QGIS, and click “Save As”.


In the Save menu, choose to save the file as an “ESRI Shapefile”. Choose the projection you just created, and save the file. Make sure to wait while QGIS is “thinking” during the save process – saving to the hard drive takes a while.


Next, go to Mapshaper.org. Choose “Visvalingam / weighted area” as your simplification method, and check the “Repair intersections” checkbox. Next, upload the .SHP file that you saved in the previous step.

Adjust the slider along the top to simplify the figures:


On the upper right, click the “Repair” link to repair any shape overlaps that have been introduced by the simplification process.


Export the file as a Shapefile. You will get a zip file with 3 files: .shp, .dbf and .shx.

Open the .SHP file in QGIS. You will have to select a CRS projection, and you should select the Statscan projection we created before. You will see a simplified map, but none of your “community sense of belonging” attributes will be in sight. Mapshaper has no ability to pick up on these attributes, so the map it generated is missing our community belonging data.

Our next step will be to re-add the attributes to the simplified file.

We do this by copying the .DBF file that QGIS generated when we originally saved the map as a .SHP file. The .DBF file contains all the data, while the .SHP file just contains geometric figures. (read more here: http://en.wikipedia.org/wiki/Shapefile ). Rename the .DBF file that was generated by Mapshaper to have “_mapshaper” in the filename. Rename the copied, old .DBF file to exactly match the name of your newly generated simplified .SHP file (be sure to keep the extension as .DBF).

You’re done – if you open the new .SHP file with QGIS, you will see the joined attribute data.

6. Save as GeoJSON with another projection

Open a new QGIS project and add the simplified .SHP file as a new vector layer. Now that it is open, we need to export the map and attributes as a GeoJSON file. GeoJSON is a file format that is taken in by Google Maps in order to represent our Health Regions.

Right click on the vector layer, and click “Save As”. Here, we will choose our Format as “GeoJSON” and our projection as “EPSG:4326”. This is the CRS projection for Google Maps, according to http://gis.stackexchange.com/a/60438 and https://developers.google.com/kml/articles/vector.

In order to find the EPSG:4326 projection, you may need to expand a menu of projections and to enter “4326” in the search box.


Next, set the Encoding to “UTF-8”. This will convert our special French language characters from the Windows-1252 encoding to an Internet-compatible encoding that will be understandable to any computer.

Save the GeoJSON file.

Here is the file that I ended up with: CCHS-geojson.geojson.

7. Creating the Google Maps page

Now that you have the areas of Health Regions, and the “sense of community belonging” attributes in one GeoJSON file, it is time to visualize this data on top of a map.

In order to create a Google Maps map, you don’t have to sign up anywhere. However, if you’d like to be able to see viewership statistics for your map, you can sign up for a Google Maps API key (https://developers.google.com/maps/signup)

Creating a “baseline” map

Start by creating a simple standalone HTML page with a <div> element that’ll contain our Google Map:


		<!DOCTYPE html>
		<html>
		<head>
			<title>CCHS Map</title>
			<style type="text/css">
				html, body, #map-canvas { height: 100%; margin: 0; padding: 0; }
			</style>
		</head>
		<body>
			<div id="map-canvas"></div>
		</body>
		</html>

Let’s add in the Google Maps javascript library inside the <head> tag:

<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js"></script>

Next, create a basic map centered on Canada. This is a necessary step before we can overlay our data and Health Region outlines on top of it.

The different map settings are explained on the “Map Types” page in the documentation – https://developers.google.com/maps/documentation/javascript/maptypes#BasicMapTypes. The controls that overlay the map (like pan and zoom) are documented here: https://developers.google.com/maps/documentation/javascript/controls .

A brief explanation is also included in the comments that are inside the code.

<script type="text/javascript">

function initialize() {
 	var mapOptions = {
 		center: { lat: 55.293277, lng: -98.3730469},	// center on a latitude and longitude that results in Canada neatly fitting in the browser
 		zoom: 5, 										// the map’s zoom level
 		mapTypeId: google.maps.MapTypeId.ROADMAP, 		// this is a map that shows roads, not a satellite imagery map
 		panControl: false, 								// hide the pan controls that overlay the map
 		streetViewControl: false, 						// don’t allow visitors to enable street view
 		mapTypeControl: false 							// don’t allow visitors to switch to satellite view
 	};

 	var map = new google.maps.Map(document.getElementById('map-canvas'),
 	mapOptions); // create a new map with the settings in “mapOptions” and then attach it to the DIV element called “map-canvas”
}

google.maps.event.addDomListener(window, 'load', initialize); // when our document loads, initialize the map creation

</script>
Save the file as “CCHS-HTML.html” and open this local file in a browser. You should see something like this:


Loading in the GeoJSON data

Upload your GeoJSON file to your webserver. Then, right after the line where the “map” variable is set and the Google Map is created, add in the following line:

map.data.loadGeoJson( "http://URL of your GeoJSON file.geojson");

This line will load in the GeoJSON file from your server. If your aim is to view the map on your local computer, try to put the .geojson file in the same directory as your HTML file, and provide the filename instead of a full URL. Make sure to open the HTML file in Firefox, as Chrome has strict cross-origin policy limitations (http://en.wikipedia.org/wiki/Cross-origin_resource_sharing ). If this doesn’t work, you will need to run a local server like XAMPP ( https://www.apachefriends.org/index.html) to serve up the file locally.
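Before pointing the map at the file, it can help to sanity-check the GeoJSON itself – that it parses, is a FeatureCollection, and that every feature carries the attributes we joined in QGIS. A small sketch (the property names match the ones used later in this tutorial):

```javascript
// Minimal GeoJSON sanity check: is it a FeatureCollection, and does every
// feature carry the attributes we joined in QGIS?
function checkGeoJson(text, requiredProps) {
  const geo = JSON.parse(text);
  if (geo.type !== "FeatureCollection") return "not a FeatureCollection";
  for (const feature of geo.features) {
    for (const prop of requiredProps) {
      if (!(prop in feature.properties)) {
        return "feature missing property: " + prop;
      }
    }
  }
  return "ok: " + geo.features.length + " features";
}

// A toy one-feature file in the same shape as this tutorial's export:
const sample = JSON.stringify({
  type: "FeatureCollection",
  features: [{
    type: "Feature",
    properties: { ENG_LABEL: "Toronto Health Unit", "CCHS-com_1": 65.4 },
    geometry: {
      type: "Polygon",
      coordinates: [[[-79.4, 43.7], [-79.3, 43.7], [-79.3, 43.6], [-79.4, 43.7]]]
    }
  }]
});

console.log(checkGeoJson(sample, ["ENG_LABEL", "CCHS-com_1"]));
```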

Setting the look of our Health Regions

The outlines and positions of our Health Regions are now inside the map – but there is no way of seeing them. In this section, we will outline each area and make it visible on the map. For a detailed explanation of how to do this, refer to the Google documentation at https://developers.google.com/maps/documentation/javascript/datalayer .

Still within the “initialize” function, and after the map.data.loadGeoJson line, we insert the following code:

map.data.setStyle(function(feature) {
 	var fcolor = "#ffffff";
 	return {
 		fillColor: fcolor, 					// the polygon fill
 		strokeWeight: 1, 					// width of the border line
 		strokeColor: '#afafaf', 			// color of the HR border
 		fillOpacity: 1, 					// how opaque is the polygon fill? 1 means totally opaque.
 		strokeOpacity: 1, 					// how opaque is the border line
 		zIndex: 0 							// a thorough explanation of what this is: http://www.smashingmagazine.com/2009/09/15/the-z-index-css-property-a-comprehensive-look/
 	};
 });

For a full reference of what you can control about the look of the Health Regions, see the document at https://developers.google.com/maps/documentation/javascript/3.exp/reference#Data.StyleOptions .

Your map should now look like this:


Styling the Health Regions to highlight when moused over

The next step is to add a dynamic behaviour to our HRs – each time your mouse hovers over one, its name and “Sense of Community Belonging” value are displayed.

First, we add the code below after the previous block. It says that whenever we hover over a HR with the mouse, its border will become thick and black. Hovering away will revert the style to our default settings from the previous block of code.

map.data.addListener('mouseover', function(event) {
 	map.data.overrideStyle(event.feature, {strokeWeight: 2, strokeColor: 'black', zIndex: 1});
 });

 map.data.addListener('mouseout', function(event) {
 	map.data.revertStyle();
 });
When you hover over an area, the result will look like this:


The next step is to display the name of the Health Region, and the Community Belonging attribute on mouseover. We grab the HR name and attribute from the GeoJSON file using the getProperty method. We display the information inside an InfoWindow object that appears at the location of where our mouse meets the Health Region’s boundary.

Use the following code. Note that the scope of the variable “infoWindow” is outside the two event functions.

// a popup with the Health Region name and the score for Sense of Community Belonging
 var infoWindow = new google.maps.InfoWindow({
 	zIndex: 2
 });

 map.data.addListener('mouseover', function(event) {
 	map.data.overrideStyle(event.feature, {strokeWeight: 2, strokeColor: 'black', zIndex: 1});
 	var healthRegionName = event.feature.getProperty('ENG_LABEL');
 	var communityBelonging = event.feature.getProperty('CCHS-com_1') === null ? "(data missing)" : event.feature.getProperty('CCHS-com_1') + "%";
 	infoWindow.setPosition( event.latLng );
 	infoWindow.setOptions( {
 		pixelOffset: {width: 0, height: -3}
 	} );
 	infoWindow.setContent(
 		"Health Region: <b>" + healthRegionName + "</b><br />" +
 		"Community belonging: <b>" + communityBelonging + "</b>"
 	);
 	infoWindow.open(map);
 });

 map.data.addListener('mouseout', function(event) {
 	map.data.revertStyle();
 	infoWindow.close();
 });

Mousing over will now look something like this:


Note that the pages at https://divideandconquer.se/2011/09/15/marker-and-polygon-tooltips-in-google-maps-v3/ and https://developers.google.com/maps/documentation/javascript/infowindows were instrumental in creating this code. My code isn’t perfect – it flickers when the mouse hovers over the InfoWindow, and the “pixelOffset” setting is a workaround to make the flickering less noticeable.

Creating the Choropleth coloration

At this point, you are ready to give your choropleth that characteristic coloration that fills each Health Region with a color that corresponds to the percentage of residents who feel intense community belonging.

We will rely on QGIS to tell us which “community belonging” values should correspond to which colors.

Start by opening the QGIS file that contains the original, unsimplified vector map and the .CSV join. This is the file that ends with the extension “.qgs”. Right click on the layer with the vector map and click “Properties”.


Click on the “Style” selection on the left, and select “Graduated” in the dropdown that appears. Now we will be able to set the color of each Health Region according to discrete categories that we’ll define. Next, in the “Column” dropdown, select the field that contains the numeric “Community Belonging” score.


Next, go to “Color ramp” and select the graduated green ramp called “YlGn7” – this will be our chosen color for the HRs. Follow this up by going to the “Classes” field and choosing the number of groups you’d like to break the HRs into. In the example, I chose 7. Finally, in the “Mode” field, we choose the method of grouping – I chose the “Quantile (Equal Count)” method. The quantile method tries to distribute the groupings so that there is an equal number of Health Regions in each of the 7 groups.
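For intuition, quantile classification roughly amounts to sorting the scores and reading off the value at the upper end of each equally sized group. A rough sketch of the idea (QGIS’s exact break-placement rules may differ slightly):

```javascript
// Rough quantile classification: sort the values, then take the upper bound
// of each of `classes` equally sized groups as a class break.
function quantileBreaks(values, classes) {
  const sorted = values.slice().sort((a, b) => a - b);
  const breaks = [];
  for (let i = 1; i <= classes; i++) {
    const idx = Math.min(sorted.length - 1, Math.ceil((i * sorted.length) / classes) - 1);
    breaks.push(sorted[idx]);
  }
  return breaks;
}

// Toy example: 8 scores into 4 classes -> 2 regions per class.
console.log(quantileBreaks([71, 62, 88, 67, 74, 69, 80, 65], 4));
```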


A list of colors and intervals will appear in the large blank area underneath the settings. Take a screenshot or record these intervals – you will need them shortly.

For now, press OK. Notice that now, in QGIS, the health region areas are colored according to their Community Belonging values:


Adding color to the Google Map

Now that we have the numeric intervals which will split our HRs into 7 color groups, we can begin coloring the polygons on our Google Map.

Remember our “map.data.setStyle” function which we used to style each Health Area? Let’s modify it to look like this:

map.data.setStyle(function(feature) {
 	var belongingScore = feature.getProperty('CCHS-com_1');
 	var fcolor = ""; // polygon fill color
 	switch (true) {
 		case ( belongingScore == 0 || belongingScore === null ): // in case of no value
 				fcolor = '#d4d4d4'; break;
 		case ( belongingScore <= 62.1 ): fcolor = 'one'; break;
 		case ( belongingScore <= 67.3 ): fcolor = 'two'; break;
 		case ( belongingScore <= 69.8 ): fcolor = 'three'; break;
 		case ( belongingScore <= 71.5 ): fcolor = 'four'; break;
 		case ( belongingScore <= 74.6 ): fcolor = 'five'; break;
 		case ( belongingScore <= 77.9 ): fcolor = 'six'; break;
 		case ( belongingScore <= 87.3 ): fcolor = 'seven'; break;
 		default: fcolor = '#d4d4d4'; break;
 	}

 	return {
 		fillColor: fcolor,
 		strokeWeight: 1,
 		strokeColor: '#afafaf',
 		fillOpacity: 1,
 		strokeOpacity: 1,
 		zIndex: 0
 	};
 });
This function gets called once for every HR. We fetch the community belonging value associated with it, and use the ranges we got from QGIS to pick the polygon's color: “one”, “two”, and so on. For HRs that do not have a community belonging value, we fall back to the default color '#d4d4d4'.

Now, we need to set valid HTML colors for each of our HR groupings. We can’t just use values like “one” and “two”.

There are two great tools for generating a valid list of 7 colors. My favourite is “Colorpicker for data” ( http://tristen.ca/hcl-picker/#/hlc/7/0.68/233B30/F4EB89). Another very popular tool is ColorBrewer (http://colorbrewer2.org/). Why are there specialized tools for choosing choropleth colors? Because picking gradations at equal numeric steps in HSV space does not produce colors that are perceived as evenly spaced. This article explains the concept thoroughly: http://vis4.net/blog/posts/avoid-equidistant-hsv-colors/#picker .

In the “Colorpicker for data” tool, I requested 7 discrete colors and chose the colors that are shown in this screenshot:


You can also preview how your chosen colors look on a sample choropleth by clicking on the “Visualized” tab at the top.


Enter the HTML color values into your code:

	case ( belongingScore <= 62.1 ): fcolor = '#F4EB89'; break;
	case ( belongingScore <= 67.3 ): fcolor = '#C4CE7B'; break;
	case ( belongingScore <= 69.8 ): fcolor = '#99B16E'; break;
	case ( belongingScore <= 71.5 ): fcolor = '#749361'; break;
	case ( belongingScore <= 74.6 ): fcolor = '#547553'; break;
	case ( belongingScore <= 77.9 ): fcolor = '#395842'; break;
	case ( belongingScore <= 87.3 ): fcolor = '#233B30'; break;
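If you prefer to keep the breakpoints and colors in one place, the switch can be replaced with a pair of parallel arrays. This is a sketch of that alternative, using the same intervals and colors as above; the helper name colorForScore is my own, not part of the Google Maps API:

```javascript
// The same intervals and colors as the switch above, kept in parallel arrays.
var BREAKS = [62.1, 67.3, 69.8, 71.5, 74.6, 77.9, 87.3];
var COLORS = ['#F4EB89', '#C4CE7B', '#99B16E', '#749361',
              '#547553', '#395842', '#233B30'];
var NO_DATA = '#d4d4d4'; // gray for regions with no score

function colorForScore(score) {
	if (score === null || score === 0) return NO_DATA;
	for (var i = 0; i < BREAKS.length; i++) {
		// First break that the score fits under picks the color
		if (score <= BREAKS[i]) return COLORS[i];
	}
	return NO_DATA; // score above the highest interval
}

// colorForScore(65) returns '#C4CE7B'; colorForScore(null) returns '#d4d4d4'
```

With this shape, changing the number of classes later only means editing the two arrays, rather than rewriting a chain of case statements.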

Save, and your map will now appear something like this:


Have a look at the final result as this full webpage with the finished Google Maps Choropleth.

Or, interact with this small version of the map:

Extra Notes

If you are thinking of visualizing the actual Canadian Community Health Data

The instructions in this document were meant to explain how to create a general choropleth. There are a few discrepancies between what you read here and what you will actually need to do to visualize the CCHS data.

For starters, there are several hierarchical levels of Health Region. There are situations where a parent region contains full data, but is not found on the map. Instead, its child regions will appear on the map – but will have incomplete data in CANSIM. (See NS region 1230 and its 2 children as an example: http://www23.statcan.gc.ca/imdb/p3VD.pl?Function=getVD&TVD=139942&CVD=139944&CPV=1230&CST=25022013&CLV=2&MLV=3 )

Next, there is the matter of child Health Regions (like individual large cities) which appear in the Excel CCHS data, but do not have a boundary defined in the MapInfo file from Statscan. This has been discussed earlier, and you will have to remove regions that can’t be visualized. If you leave these un-visualizable regions in your CSV attribute file, then those “Graduated” color ranges will be incorrect. This is what happens in our example – our color groups are broken into 7 buckets with an equal number of HRs, but some HRs can’t be visualized, so the map will not actually show an equal number of regions in each color range.
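One way to clean this up is to filter the attribute rows against the set of region codes that actually have boundaries, before computing the color classes. This is only a sketch: the HR codes, the HR_code field name, and the row layout below are illustrative, not the real CCHS or MapInfo structure:

```javascript
// Sketch: drop attribute rows whose HR code has no boundary in the map file.
// The codes and field names here are made up for illustration.
var boundaryCodes = new Set(['1001', '1002', '1004']); // codes present in the boundary file

var rows = [
	{ HR_code: '1001', score: 71.2 },
	{ HR_code: '1003', score: 68.4 }, // e.g. a city-level region with no boundary
	{ HR_code: '1004', score: 74.0 }
];

// Keep only the rows that can actually be drawn on the map
var visualizable = rows.filter(function (row) {
	return boundaryCodes.has(row.HR_code);
});
// visualizable keeps only the '1001' and '1004' rows
```

Running the quantile classification on the filtered rows, rather than the full CSV, keeps the equal-count buckets honest.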

If you’d like to hide distracting map areas

In our example, we are focusing on Canada, and seeing the USA on our map is distracting. It turns out that graying out the regions we are not interested in is a herculean challenge in Google Maps.

There is a tool that can help you outline a region, and gray out everything around it: http://maps.vasile.ch/geomask/

There is a great article with examples as to how to do this another way, by Will Cadell: http://www.sparkgeo.com/labs/focus/

And finally, a discussion that might help: http://stackoverflow.com/questions/26538676/how-to-invert-kml-so-that-area-outside-of-polygon-is-highlighted


Thanks go to Michael Kuzmin (kuzmin.ca) for fact-checking this post.

Copyright © 2020 Jacob Filipp
