Sustainable Upskilling Solutions (SUSs): The Next Generation Of eLearning Assessment

Assessments and quizzes deployed via LMSs are topline and superficial in nature. Sustainable Upskilling Solutions (SUSs) resolve this by incorporating a comprehensive assessment suite within their technology stack.

“No eLearning course – in fact, no course that aims at transferring knowledge to learners, is truly complete until it is accompanied by provisions to assess the outcomes of learning”, writes Ayesha Habeeb Omer in her article “Creating Successful Assessments For eLearning” (https://elearningindustry.com/successful-assessments-for-elearning-part-1).

Despite the significance of assessment, the state of eLearning assessment has regrettably been in limbo for close to two decades, and this is only now changing with the advent and deployment of “Sustainable Upskilling Solutions”. SUSs take a somewhat different approach to learning and development compared to traditional LMSs in that they provide organisations with a mechanism to train and assess the entire workforce easily and, most importantly, sustainably.

The Assessment (And Reporting) Problem

eLearning assessment has by and large consisted of SCORM quizzes with multiple-choice questions comprising single and/or multiple answers. Some authors place emphasis on matching question types, in which the learner is tested on the relationship between two sets of data. Other question types include entering words in a blank space, dragging and dropping words or phrases, drop-down menus from which the correct answer must be selected, as well as word builders akin to a crossword puzzle.

As if the most prevalent pitfall of assessment – namely ambiguous questioning – were not enough, some of the above-listed question types truly open a Pandora’s box: for example, if a learner must enter a word in a blank space and misspells it, the supplied answer will be marked as incorrect – even if just a single letter is out of place! This question type is hardly fair to dyslexic learners! Ditto for word builders: while gamification can certainly have its place within a well-defined eLearning mix, gamification in assessments can be a distraction and unnecessarily complicate the desired objective. After all (and in truth), one is assessing adults and not children…

And that, in short, sums up SCORM’s assessment ability. The good news is that these are just the frontend limitations; the backend limitations are unfortunately even worse: SCORM, primarily due to its container-based architecture, only makes provision for capturing course completions, time spent in the course, milestone progression, pass/fail and a single score, which of course significantly curtails any meaningful reporting. SCORM’s age is showing, and newer standards, specifically xAPI (also referred to as “Experience API” or “Tin Can API”) as well as CMI5, are hailed as SCORM’s successors. Specifically, these newer standards allow learning departments to track (and thus report on) just about any activity and learning experience that one can observe, including completing activities and simulations, performing job functions, producing work deliverables, completing a Khan Academy course and so on, via an externally managed Learning Record Store (LRS). While xAPI (and CMI5) appear to be the holy grail when it comes to meaningful reporting as a result of better data management, the reality is somewhat different.
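For context, the handful of data points a SCORM 1.2 course typically hands back to the LMS can be sketched as follows – a simplified illustration of a few cmi.core data model elements, not an exhaustive or authoritative listing:

```python
# Simplified sketch of what a SCORM 1.2 package typically reports back
# to the LMS via the cmi.core data model (illustrative, not exhaustive).
scorm_tracking_data = {
    "cmi.core.lesson_status": "passed",      # passed / failed / completed / incomplete
    "cmi.core.score.raw": 80,                # a single score
    "cmi.core.lesson_location": "module-3",  # milestone / bookmark within the course
    "cmi.core.session_time": "00:42:10",     # time spent in the session
}
# Anything richer than this (what was answered, how, and in what context)
# stays locked inside the course "container".
```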

Per the xAPI specification, an LRS is an external server that is responsible for receiving, storing, and providing access to Learning Records. In other words, an LRS sits external to the LMS and is in essence a repository into which xAPI statements are recorded. Said xAPI statements are expressed in the form “actor verb object context”. To illustrate: “Jane failed Induction Training 101 with John Instructor” (actor = Jane, verb = failed, object = Induction Training 101, context = John Instructor). LRSs come standard with an initial set of verbs (such as “attended”, “completed”, “attempted”, “passed”, “failed”, etc.) and there is no limit to the number of verbs that a training department can create. The context is likewise not limited in terms of definitions. These “actor verb object context” statements are imported into the LRS from the LMS and subsequently exported to, for example, a report generator from which reports and analytics can be drawn up. On paper this seems straightforward and benign; however, for a typical L&D department it is truly complex and something that warrants external involvement (in short: delays and complications).
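To make the structure tangible, here is a minimal sketch of that very statement expressed as a Python dictionary; the names, email addresses and activity URI are illustrative placeholders only, not values from any particular LRS:

```python
# Sketch of an xAPI statement for "Jane failed Induction Training 101
# with John Instructor". A client would POST this (as JSON) to the
# LRS's statements endpoint.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Jane",
        "mbox": "mailto:jane@example.com",          # placeholder
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/failed",
        "display": {"en-US": "failed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.com/courses/induction-training-101",  # placeholder
        "definition": {"name": {"en-US": "Induction Training 101"}},
    },
    "context": {
        "instructor": {
            "objectType": "Agent",
            "name": "John Instructor",
            "mbox": "mailto:john.instructor@example.com",  # placeholder
        }
    },
}
```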

While xAPI (and in turn LRSs) are a significant step up from SCORM-based reporting, the fact that each training department can create its own verbs and contexts means that common denominators disappear and comparability is consequently eroded. While generating meaningful, comparable and objective reports is technically possible, any statistician will confirm that the success and outcome thereof is directly proportional to the quality of the data from which the reports are generated; and if everyone is in essence free to do as they like (which they can and do, as a result of the aforementioned verbs and contexts), the adage of “too many cooks in the kitchen” rings very true: the end result could very well be a calamity of meaningless, non-comparable and subjective reports.

The Sustainable Upskilling Solution

There is, however, a solution to the assessment problem: Sustainable Upskilling Solutions, or SUSs. In addition to simply taking content to learners, SUSs fundamentally quantify the value that organisations obtain through their training efforts, typically in line with a training evaluation model such as the CIPP (Context, Input, Process and Product) Evaluation Model, the Phillips ROI Model or Kirkpatrick’s Training Evaluation Model.

SUSs make provision for all common “container-based” content formats, which extend to SCORM 1.2, SCORM 2004, AICC, HTML5, xAPI and CMI5 – in essence exactly the same as a “traditional” LMS. In addition, though, SUSs also natively allow Common File Formats to be disseminated, which include the likes of Microsoft Word, PowerPoint, Adobe PDF, interactive PDFs, MP4 video, MP3 audio, Adobe InDesign (INDD), YouTube, Vimeo, ThingLink and even external links. More about content formats can be found in this article (https://elearningindustry.com/sustainable-upskilling-solutions-outclass-lms-content-options).

The rubber really hits the road when one considers how SUSs handle assessment: SUSs contain a comprehensive assessment suite that can be layered onto existing container-based (SCORM, xAPI, etc.) courses – this is truly unique! The key here is that the assessment sits outside the container-based content, so that meaningful, structured and granular data can be extracted and fed back into the reporting subsystem. This is distinct in that the assessment suite allows the learner to benefit from the (existing) course content irrespective of the output format. Better still, the reporting that follows is truly next-generation: the standard reporting suite is comprehensive and goes well beyond a “traditional” LMS’s reporting capabilities – and all this without having to get IT involved, unlike an LRS! Should the standard reporting environment not be adequate, SUSs also allow for custom reporting that can even leverage data from external systems. Reporting, though, is a subject for another article.

The foundation of the SUS assessment suite’s capability lies in the Question Bank functionality: the Question Bank is a question repository into which questions are loaded, defined and easily grouped. The course author can then stipulate whether questions should be presented in sequential or randomised order, and can also specify that a given number of random questions be drawn from each question group within a Question Bank. For example: draw two questions from Group A (comprising 10 questions), four questions from Group B (comprising 15 questions), six questions from Group C (comprising 20 questions) and so on. The net result is that each learner receives a completely different assessment; the results nonetheless remain comparable due to the grouping functionality. Skills gap analysis, anyone?
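The selection logic behind such a draw is easy to illustrate. The following minimal Python sketch, in which the bank structure and group names are hypothetical rather than an actual SUS API, draws the example above – two questions from Group A, four from Group B and six from Group C – and optionally shuffles the result:

```python
import random

# Hypothetical question bank: group name -> list of question IDs
question_bank = {
    "Group A": [f"A{i}" for i in range(1, 11)],   # 10 questions
    "Group B": [f"B{i}" for i in range(1, 16)],   # 15 questions
    "Group C": [f"C{i}" for i in range(1, 21)],   # 20 questions
}

# How many questions to draw from each group
draw_plan = {"Group A": 2, "Group B": 4, "Group C": 6}

def build_assessment(bank, plan, randomise_order=True):
    """Assemble one learner's assessment by sampling from each question group."""
    selected = []
    for group, count in plan.items():
        selected.extend(random.sample(bank[group], count))
    if randomise_order:
        random.shuffle(selected)
    return selected

print(build_assessment(question_bank, draw_plan))
```

Because every learner’s paper is drawn from the same groups, scores can still be compared group by group even though no two learners see identical questions.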

The assessment suite (including the generation of Question Banks and questions, as well as the management thereof) is managed directly on the SUS without the need for external assessment authoring tools. SUSs make provision for numerous assessment parameters, which include multiple-choice questions comprising single and/or multiple answers (questions may contain images and videos), free-text answers, observational/practical assessments, surveys, as well as document or file submissions. From a question definition perspective, SUSs allow for further customisation such as weighting of answers as well as negative scoring, should this be required. Assessments may be timed or untimed, and timed assessments may even be proctored.
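To illustrate what answer weighting and negative scoring amount to in practice, consider this minimal sketch; the question structure and scoring rule are hypothetical, not an actual SUS data model:

```python
# Hypothetical multiple-choice question with weighted answers and negative scoring.
# Positive weights reward (partially) correct options; negative weights penalise
# options that should never be selected.
question = {
    "text": "Which of the following are fire-safety requirements?",
    "options": {
        "Keep exits clear": 2.0,              # fully correct, higher weight
        "Report blocked extinguishers": 1.0,  # correct, standard weight
        "Prop fire doors open": -1.0,         # negative scoring if selected
        "Ignore alarms during drills": -1.0,
    },
}

def score_answer(question, selected_options, floor_at_zero=True):
    """Sum the weights of the selected options; optionally floor the result at zero."""
    score = sum(question["options"].get(opt, 0.0) for opt in selected_options)
    return max(score, 0.0) if floor_at_zero else score

print(score_answer(question, ["Keep exits clear", "Prop fire doors open"]))  # 1.0
```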

Proctoring, or learner authentication via face recognition, is a relatively new phenomenon in eLearning, but one that was somewhat questionable in the past, given that learners’ data was (often unreliably) shipped to external parties who performed the authentication process. With a SUS, this level of artificial intelligence happens within the system itself, thereby maintaining a high level of confidential data fidelity.

SUSs typically also include assessment suite mainstays such as moderation, certification, badges and learner-to-author feedback options, to name but a few.


In closing, SUSs blur the lines between a number of previously freestanding platforms, specifically, in this instance, between a “traditional” LMS and dedicated assessment tools. The net effect is more flexibility for assessment authors, an elevated and more professional assessment experience for the learner, and a better return for organisations deploying a SUS, thanks to the granularity, insights and measurement that SUSs seamlessly deliver.