The thesis… It’s complete!

[Image: My final recommendation to the client after evaluating the prototype]

Here’s the abstract from my paper titled ONBOARDING XMOOC PROJECT TEAMS:
Designing learning for professional development.

Purpose: The goal was to design and test a best-practice Onboarding approach, informed by literature and an instructor survey, to address challenges in executing MOOC projects, and to improve the Onboarding experience for MOOC instructors and project teams.

Theory: The author compiled challenges and best practices into the ADDIE framework as inspiration for selecting critical learning objectives for an Onboarding curriculum, employing the 70-20-10 model (McCall, Lombardo, & Morrison, 1988). Iterative design techniques were informed by thoughtful interaction design (Löwgren & Stolterman, 2004). Evaluation of a beta prototype was conducted using the framework proposed by McKenney and Reeves (2012).

Method: The project team previewed the alpha prototypes of a MetaMOOC learning design. The beta prototype was developed with indicative content and formally evaluated with five experts using qualitative interviews. The feedback was coded into categories to inform future iterations.

Results: Evaluation showed the beta prototype’s learning (formative) objectives and content to be largely appropriate, with suggested improvements. The design (summative) objectives proved to be unrealistic. The author recommends a more comprehensive curriculum as well as a project management toolkit spanning the entire project lifecycle.

Here is the final document if you wish to read it!

PHEW! Now I can enjoy life just that teensiest bit more… and decide what to learn next. MWAAAAAHHHHAAAAHAAAA…

TIA 132 Digital Tools for Communication and Learning

[Image: First iteration of an “Onboarding” tutorial targeted to new senior users of Facebook]

This course involved a group design project. We were assigned to work with a research team looking at “Digital Seniors” and their use of social media. Due to limited time, the prototypes were not tested on actual seniors; however, we did our best to incorporate design touches inspired by the literature. To prepare ourselves for the design project, we read selected literature in the domain and informally interviewed senior Facebook users in our social circles.

The scope of our design project was not defined for us when we started. To keep scope manageable, we considered Porter’s (2008) framework for designing social applications. The user is confronted with choices during the sign-up, first-time use, and ongoing engagement phases of using a new application. We chose to focus initially on the sign-up and first-time use scenarios.

We defined our design objectives as:

  1. Address typical UI challenges faced by seniors, e.g.:
    1. Reduce the text and replace with icons if possible
    2. Avoid slang or informal wording
    3. Use very clear fonts and colour schemes, avoiding too many similar colours
    4. Do not use pop-up windows
  2. Employ elements of Multi-layered Design when appropriate (see the sketch after this list), specifically:
    1. Present only the most popular or interesting features and coach the user how to get started
    2. Make the revealed features easy to find later
    3. Coach the user on how to explore the platform when they want to learn how to do something new
  3. Consider Porter’s (2008) framework and limit the designs to the sign-up and first-time use scenarios.
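
To make objective 2 concrete, here is a minimal sketch of how a multi-layered feature set might be represented, with the onboarding tutorial surfacing only the first layer. This was not part of the actual project (which was paper prototyping), and the feature names and layer assignments are invented for illustration:

```python
# Hypothetical sketch of multi-layered design: each feature is assigned a
# layer, and the onboarding tutorial reveals only layer-1 features.
ONBOARDING_FEATURES = {
    "view_photos": 1,   # layer 1: shown and coached during the tutorial
    "post_status": 1,
    "find_friends": 1,
    "groups": 2,        # layer 2: revealed later, kept easy to find
    "events": 2,
    "marketplace": 3,   # layer 3: discovered when the user explores
}

def features_for_layer(max_layer: int) -> list[str]:
    """Return every feature visible up to and including the given layer."""
    return [name for name, layer in ONBOARDING_FEATURES.items()
            if layer <= max_layer]

print(features_for_layer(1))  # the tutorial coaches only these features
```

Keeping the layer assignment in data rather than hard-coded into the screens would make it easy to “reveal” a feature later without redesigning the tutorial.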

Our first round of storyboarding produced several ideas: a simplified account sign-up, a simplified home page, and a new user “onboarding” tutorial which would introduce a new user to the features seniors are most interested in. We reviewed this first artefact together with our instructor and decided as a team to go with the onboarding idea while incorporating ideas from the other two, because realistically it would be unlikely that Facebook would ever open the API enough to redesign the account sign-up or the home page. The third idea looked like this:

[Image: First “operative image” of the onboarding concept]

In the second design iteration we established some clear assumptions: the user was on a mobile phone, the tutorial would be presented upon first login, and it would end with the user being delivered to their home page. A set of tickboxes indicated progress through the tutorial, and a section on privacy educated the user on things to consider when posting content. Additionally, one group member wanted to experiment with adding a friendly coaching character to the tutorial, as this seemed to be an interesting technique employed even on Swedish government websites. This version looked like this:

[Image: Second iteration operative image]

We reviewed this prototype together with the expert research team and one assistant. The research team reacted negatively to the Smiley character and the discussion of “fun” things one could do in Facebook. The lead researcher remarked that older men would likely find this type of approach too frivolous and would not want to continue with the tutorial. Based on their experience so far working with seniors, the experts’ impression was that seniors have fixed ideas about what the mobile phone is used for, and such a character was not serious enough. They recommended creating a genderless, ageless character similar to Microsoft’s Clippy instead. Furthermore, they recommended making it obvious how to re-take the tutorial at a later stage, and ending the tutorial on the home page with the icons activated for the concepts discussed.

In the third iteration, we added a new character based on the Facebook icon, Fifi. Fifi focuses less on “fun things you can do” and more on educating the user about “useful” things to do on the platform. The screen flow mostly stayed the same, but the team added some colour. Here’s how it looked:

[Image: Third iteration operative image]

We then previewed the screen flow with our classmates and requested their feedback. They seemed sceptical that senior users were actually using Facebook on their phones, whereas our research showed that seniors mainly use touchscreen devices for accessing Facebook.

We concluded it would be wonderful to actually show some senior users our ideas and get feedback from them! The team cited our two biggest lessons learned: first, ask the client questions until you understand the expectations; and second, commit “something” to an operative image to get that creative tension started. Hopefully our designs will inspire the expert research team as they continue their grant work.

Thanks a lot to my classmates for a valuable collaboration. The final paper describing the project is available here.

Reference:

Porter, J. (2008). Designing for the Social Web. Berkeley, CA: New Riders.

TIA 130 Research Methods

[Image: Bar chart of data preprocessing results; 89% of the data was screened out of the analysis!]

The project for this course was to design a research project and perform a structured, though not systematic, literature review for the paper. I called a friend who owns a local start-up and asked whether there was any data I could analyse for him. He wanted better insights into his app store reviews to help him and his team shape their product roadmap.

His initial guidance to me was quite general: the study should be “exploratory in order to learn what kind of features good reviews are bringing up, and to find correlations between certain product features and satisfaction.”

My research questions first sought out best practices for analysing app store reviews, then specified which questions I wanted to answer with the resulting statistics. I received a year’s worth of Apple and Google app store data to analyse.

My findings were proprietary, so I can only generalise here and can’t upload the entire paper:

  • Most of the time spent on the data analysis went into “data preprocessing”: cleansing the data according to given criteria, weeding out suspected “spam,” spotting when the reviewer might be discussing a certain feature, and assigning that portion of the review to the feature. I was only one researcher and did not have the benefit of any software to help me with this.
  • 15% of Apple data and 89% of Google (Android device) data was excluded during the preprocessing phase.
  • The client’s app store data was consistent with what was reported in the literature: generally the ratings were four or five stars, longer reviews tended to be negative, and Apple reviews were of higher quality (i.e. much less suspected spam or other reasons to screen out) compared to the Google reviews.
  • Apple customers mentioned more specific features in their reviews but Google customers offered suggestions more often.
  • I created 42 possible features to code against. About 85% of Apple users’ comments concerned just six features, while Google (Android) users spread the same share of comments across their top 14 features.

In a presentation I delivered basic descriptive statistics (i.e. means and standard deviations) for each research question, and the client team found them really helpful. Future research on this topic should definitely explore automation, plus additional researchers to help code the feature mentions neutrally.
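
Since all of this coding was done by hand, here is a minimal sketch of how the feature-mention coding and the descriptive statistics could be automated, in the spirit of the future research I recommend. The keyword lists, review texts, and feature names are invented stand-ins; real reviews would need far more careful matching:

```python
from statistics import mean, stdev

# Hypothetical keyword lists standing in for the 42 coded features.
FEATURE_KEYWORDS = {
    "sync": ["sync", "synchronise"],
    "login": ["login", "log in", "password"],
    "notifications": ["notification", "alert"],
}

reviews = [  # toy stand-ins for cleaned app store reviews
    {"text": "Sync keeps failing after login", "rating": 2},
    {"text": "Love the notifications!", "rating": 5},
    {"text": "Great app", "rating": 4},
]

def code_review(text: str) -> list[str]:
    """Assign a review to every feature whose keywords it mentions."""
    text = text.lower()
    return [f for f, words in FEATURE_KEYWORDS.items()
            if any(w in text for w in words)]

# Descriptive statistics (mean and standard deviation of ratings) per feature.
by_feature: dict[str, list[int]] = {}
for r in reviews:
    for feature in code_review(r["text"]):
        by_feature.setdefault(feature, []).append(r["rating"])

for feature, ratings in by_feature.items():
    sd = stdev(ratings) if len(ratings) > 1 else 0.0
    print(f"{feature}: n={len(ratings)}, mean={mean(ratings):.1f}, sd={sd:.1f}")
```

Even a crude pass like this would let a second researcher spend their time reviewing only the disputed assignments, rather than coding everything from scratch.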

 

PDA 676: Behavioural “Nudges” in Learning Technology

[Image: My nod of respect to Josh Bersin: my Behavioural Nudges in Education Maturity Model]

For the course on Digital Literacy, we had to do another literature review paper on a topic of our choice. Lately I’ve been intrigued by the idea of Nudge Theory, popularised by Drs. Richard Thaler and Cass Sunstein. Essentially, the idea of a “Nudge” is to introduce ways to incentivise people to make a choice that is more beneficial to them. These can take many forms, which I won’t summarise here. My favourite example is a school cafeteria which provides precut fruit and places it at eye or reach level, whilst sugary dessert is placed in a harder-to-reach spot. The fruit is easy to obtain and has the benefit of already being cut up, so it’s easy to eat! The sugary dessert is still available, but you have to work just that tiny bit harder to get it.

I looked at what behavioural nudges are available in learning technologies, and what type of behaviour they might be trying to encourage in students. I then categorised the examples into the proposed Maturity Model above. I found that there isn’t much available, and that most research in this area is coming out of the United States. Furthermore, most institutions of higher learning struggle to even implement the basics with their Learning Management Systems (LMS) (Level 1). There are some very interesting programmes out there, though, which use more data and combine rudimentary predictive analytics with personal coaching to help university students successfully graduate (Level 3).

I didn’t find any case studies at the highest level of maturity, true predictive nudging. Nudging strategies at this level, were they to exist, would acknowledge that student success is not solely reliant on academic achievement, and that the data needed to flag correlations lives in systems outside the LMS, beyond algorithms based on historically successful students. I propose that universities consider the student as a whole person, factors about how the university is organised, and better data about why students drop out as potential additions to a predictive analytics algorithm and nudging model.
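
As a thought experiment only, a nudge at this highest maturity level might look something like the sketch below: a risk score combining LMS activity with non-academic, whole-person signals, triggering a human coaching conversation rather than an automated message. Every field name and weight here is invented for illustration; a real model would be trained on institutional data, not hand-weighted:

```python
from dataclasses import dataclass

@dataclass
class StudentSignals:
    # Hypothetical signals, deliberately reaching beyond the LMS.
    lms_logins_per_week: float
    assignments_submitted: int
    library_visits_per_month: float   # non-LMS campus system
    campus_job_hours_per_week: float  # whole-person factor

def risk_score(s: StudentSignals) -> float:
    """Toy linear risk score built from simple threshold flags."""
    score = 0.0
    score += 0.3 if s.lms_logins_per_week < 2 else 0.0
    score += 0.3 if s.assignments_submitted == 0 else 0.0
    score += 0.2 if s.library_visits_per_month < 1 else 0.0
    score += 0.2 if s.campus_job_hours_per_week > 20 else 0.0
    return score

student = StudentSignals(1.0, 0, 0.5, 25.0)
if risk_score(student) >= 0.5:
    # The "nudge" is a prompt to a human coach, not a message to the student.
    print("Nudge: schedule a coaching conversation with this student.")
```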

Read my final paper here.

 

PDA 675: Factors Driving Business Value in Learning

This paper is my final assignment in the course on learning theories. We had to do a literature review on any topic of our choice. I’ve always been interested in how to achieve more value from learning interventions, and the programme is centred on IT and Learning. I reviewed research around the Technology Acceptance Model (TAM, popular in the IT world) and how its factors seem to affect the levels of learning-effectiveness evaluation proposed by the Kirkpatrick model (popular in corporate learning).

[Image: Relationship between the TAM factors and the Kirkpatrick levels]
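
As an illustration only (nothing below comes from the reviewed studies, which used their own instruments and statistics), here is a minimal sketch of how one might check whether a TAM factor such as Perceived Usefulness correlates with a Kirkpatrick Level 1 reaction score. The survey values are invented:

```python
from statistics import correlation  # available from Python 3.10

# Invented survey scores on a 1-5 scale for five learners.
perceived_usefulness = [4.5, 3.0, 5.0, 2.5, 4.0]   # TAM factor
reaction_score       = [4.0, 3.5, 5.0, 2.0, 4.5]   # Kirkpatrick Level 1

# Pearson correlation: a positive value is consistent with the claim that
# Perceived Usefulness drives a positive learner reaction.
r = correlation(perceived_usefulness, reaction_score)
print(f"Pearson r between usefulness and reaction: {r:.2f}")
```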

Learning leaders have a wealth of indicative variables available to leverage in the design and delivery of learning programmes. Some recommendations one can take from the literature in terms of learning technology:

  • An impression of user-friendliness is important for a positive reaction, but it is not as important as Perceived Usefulness. Look to maximise the organisational and social factors which can convince learners that a training intervention will benefit them, and they will tolerate usability challenges to get their expected reward.
  • One cannot assume that even computer-savvy learners are comfortable with learning systems, and Computer Self-Efficacy powerfully predicts whether they will have a positive experience. Although it’s always desirable to make a system as user-friendly as possible, some aspects of a user interface may be outside the organisation’s control. It is therefore sensible to always include content orienting the learner to the systems they will need to master in order to complete their training.
  • If a learner who has completed a training intervention still believes the subject matter was difficult, it is highly unlikely the training intervention will have been worth the investment. A question on the end-of-course quiz will help trainers to identify these learners and offer additional coaching.
  • Organisational and social support variables have a consistent positive impact all the way through to transfer of learning on the job. As a learning professional one can’t micromanage every single line manager or colleague, but one can require proof of learner conversations with their leadership as part of the instructional design. For example, a standard guideline for a post-training one-to-one with the manager could include planning opportunities to practise the new skill, sharing with the rest of the team, and defining how the team can support the learner going forward.

One disappointing aspect of all the literature in this domain was that it was based on self-reporting by learners. No study endeavoured to compare user acceptance or transfer using empirical data based on behaviour.

Here is my final detailed paper.