The use of e-assessment is increasing rapidly in the VET sector in Australia. Recent national benchmarking surveys revealed that over 40 per cent of RTOs and more than 60 per cent of teachers and trainers are using some form of e-assessment. E-assessment is the use of information technology for any assessment-related activity. [1]
These words come from a research report titled, ‘E-assessment guidelines for the VET sector’. It was produced on behalf of the National Quality Council and the Australian Flexible Learning Framework with funding provided through the Australian Government Department of Education, Employment and Workplace Relations and state and territory governments.
Application of e-assessment
This document used the elements of the TAEASS402A Assess competence unit of competency to identify the application of technology. [2] It should be noted that the current TAEASS402 unit supersedes and is deemed equivalent to the TAEASS402B unit, and the TAEASS402B unit supersedes and was deemed equivalent to the TAEASS402A unit.
Therefore, the nexus between the skills of an assessor conducting assessment and the use of technology was established more than 10 years ago.
Compliance with the principles of assessment
The ‘E-assessment guidelines for the VET sector’ explains how e-assessment can comply with the principles of assessment. [3]
Compliance with the rules of evidence
The ‘E-assessment guidelines for the VET sector’ explains how e-assessment can comply with the rules of evidence. [4]
Candidate authentication and security
The ‘E-assessment guidelines for the VET sector’ explores how e-assessment must provide for candidate authentication and the security of both the assessment process and assessment data. [5]
In conclusion
What’s happened in the ten years since the ‘E-assessment guidelines for the VET sector’ was published?
More RTOs use technology to support their assessment processes and activities
Most trainers and TAFE teachers have used some form of e-assessment
Technology has become more accessible and intuitive to use
The use of an LMS has become ubiquitous.
The ‘e’ in e-assessment no longer stands for ‘electronic’; instead, it has come to stand for ‘everyday’ (or ‘everywhere’).
The Education IRC wants to implement a new TAEASS404 Conduct e-assessment of competence unit of competency. This new unit duplicates what is adequately covered by the current TAEASS402 Assess competence unit of competency. As an example, the following table illustrates the similarity by comparing performance criteria.
If you would like to view the complete mapping of performance criteria, please refer to pages 33 to 37 in the following document:
Do you think that a duplicate unit of competency is needed?
Using technology to support assessment processes and activities is not new. The current TAE Training Package released in 2016, and the previous TAE10 Training Package, have been developed with consideration of technology being used when conducting assessments. The only difference between now and ten years ago is that technology has generally become easier to learn and use. Technology is a foundation skill required to be an assessor but I do not believe we need to develop and implement a duplicate unit of competency.
Do you think that the Education IRC should scrap their idea of implementing the proposed TAEASS404 Conduct e-assessment of competence unit of competency?
References
[1] Australian Flexible Learning Framework and National Quality Council, E-assessment guidelines for the VET sector (page 3), 2011
[2] Australian Flexible Learning Framework and National Quality Council, E-assessment guidelines for the VET sector (pages 7 to 10), 2011
[3] Australian Flexible Learning Framework and National Quality Council, E-assessment guidelines for the VET sector (page 19), 2011
[4] Australian Flexible Learning Framework and National Quality Council, E-assessment guidelines for the VET sector (pages 23 to 25), 2011
[5] Australian Flexible Learning Framework and National Quality Council, E-assessment guidelines for the VET sector (pages 21 and 22), 2011
As part of the TAE Training Package review, two new e-units have been drafted. PwC’s Skills for Australia is currently seeking our feedback about these two draft units of competency:
TAEASS404 Conduct e-assessment of competence (draft)
TAEDEL405 Plan, organise and facilitate e-learning (draft)
I have published two previous articles that present a case against further developing and implementing these two units:
In this article, I provide a background to the development of these units of competency. I will also highlight the flawed process and missed opportunities.
Background to the E-assessment project
Case for Change
The Education IRC developed a Case for Change that covers:
Holistic review of the TAE Training Package: a holistic review of six qualifications and 55 units of competency in the TAE Training Package.
E-assessment project: an urgent response to address an identified gap in e-assessment in the TAE Training Package involving the development of two new units of competency.
The Case for Change was approved at the AISC meeting on 19 August 2021. [1]
E-assessment project
The following is an extract from the Case for Change. [2]
The AISC approval was for the creation of two new e-assessment units of competency. One unit of competency was to cover ‘designing e-assessment’, and the other to cover ‘facilitating e-assessment’ (as per Case for Change, see below).
The draft units of competency were released on 22 October 2021. Instead of two e-assessment units of competency, there was one e-learning unit and one e-assessment unit. This is a significant alteration to the project scope that had been approved by AISC. I have not seen any communication acknowledging or explaining this change of scope. This raises serious concerns about the decision-making and integrity of the Education IRC.
Why was the scope changed?
Has AISC approved the changed scope?
Why hasn’t the changed scope been communicated and justified?
Are members of the Education IRC aware of the changed scope? If so, why haven’t they ensured that the change be communicated?
The Case for Change identified an ‘e-assessment’ skills gap. AISC approved the creation of two new e-assessment units of competency. The Case for Change did not propose the creation of an e-learning unit of competency.
The Education IRC had considered VET sector stakeholder feedback and decided that two new e-assessment units of competency were needed. This was presented to the AISC, and subsequently approved.
Was the original e-assessment skills gap false? If so, the ability of the Education IRC to analyse VET sector stakeholder feedback and make decisions is faulty.
Was the original solution to create two e-assessment units of competency wrong? If so, the ability of the Education IRC to develop solutions is questionable.
Did the Education IRC feel compelled to create two new units of competency? If so, did the Education IRC think it was better to create two units of competency than to admit it got it wrong and develop only one e-assessment unit of competency?
The Case for Change does not justify the creation of an e-learning unit of competency. And the creation of a ‘new’ e-learning unit of competency is not required. The unnecessary creation of an e-learning unit of competency has been discussed in my previous article titled ‘Do we need a new e-learning unit of competency?’.
The Case for Change states that e-assessment requires a significantly different set of skills and knowledge to traditional assessment practices.
What are ‘traditional’ assessment practices?
Are ‘traditional’ and ‘contemporary’ assessment practices the same thing?
Are the skills and knowledge to conduct e-assessment ‘significantly’ or ‘slightly’ different to those needed to conduct ‘traditional’ assessment?
The following is an extract from the Case for Change. [2]
Five issues have been listed by the Education IRC. These issues or poor assessment practices may be caused by assessors lacking the required knowledge and skills. However, these issues may instead be caused by RTO management lacking knowledge or skills, inadequate allocation of resources for conducting quality assessment, or improper implementation of an RTO’s policies, procedures, or systems. Assessors may be falsely identified as the cause of issues, and it is then assumed that developing the knowledge and skills of assessors will resolve these issues. It is a waste of time and money to train assessors if the cause of the issues is unrelated to assessors or their knowledge and skills.
The following is an analysis of the five issues.
None of the five issues seems to justify the creation of new e-assessment units of competency, especially ones aimed at developing the knowledge and skills of assessors.
The Case for Change was not made publicly available until after it had been approved by AISC. This meant that there had not been any prior opportunity to analyse and comment on the proposal or the justification of what was being proposed. The following is an extract from the Case for Change. [2]
‘Strong evidence of industry support for the Case for Change’ is not the same thing as having strong evidence for change. The following peak bodies are listed as supporting, or agreeing with, the Case for Change:
TAFE Directors Australia
Enterprise RTO Association
Australian Industry Group
Independent Tertiary Education Council Australia
Australasian Curriculum and Certification Authorities
Regulators and state and territory training authorities (STAs).
Sadly, it seems that no one with current or detailed knowledge of the ‘contemporary assessment environment’ was consulted. The justification by the Education IRC for creating two new e-assessment units of competency is flawed. It is regrettable that no one consulted from the peak bodies, regulators, state and territory training authorities, or AISC knew enough to disagree with the false justifications and the proposed solution. The justifications for the ‘E-assessment project’ have already been shown to be false because the project scope was altered within two months of it being announced. This is evidence that the feedback from VET sector stakeholders, the proposal by the Education IRC, and the approval by AISC have all been faulty.
Recent announcements [3] to scrap the current training package development and endorsement process will mean that the current people will have no accountability in the future. The current people involved will not have to face the consequences of getting things wrong or wasting government funding. I think it would be desirable to pause the current review of the TAE Training Package, including the ‘E-assessment project’, until the new Industry Cluster has been established.
Scope of the E-assessment project has changed
It is probably too late to stop the progress of the ‘E-assessment project’ that has morphed into the ‘E-learning and E-assessment project’. It should be noted that the Case for Change did not propose the creation of an e-learning unit of competency. This would indicate that the training product consultation and development process is flawed.
The scope of the ‘E-assessment project’ has been altered since it was approved by AISC. The following table shows that the proposed ‘Design e-assessment’ unit of competency has been dropped. It also shows that a ‘Plan, organise and facilitate e-learning’ unit has been created.
A ‘Design e-assessment’ unit of competency was proposed by the Case for Change but has not been pursued. I am unaware of any acknowledgement or communication about why it has silently been removed from the ‘E-assessment project’ scope. Instead, a new unit has been created that had not been proposed by the Case for Change. A new ‘Plan, organise and facilitate e-learning’ unit of competency has been drafted. Again, I am unaware of any acknowledgement or communication about why it has silently been added to the ‘E-assessment project’ scope. Evidence that supports the creation of a new e-learning unit of competency is unknown. It had not been identified by VET sector stakeholders and had not been presented to AISC for approval.
The work roles associated with e-assessment
The following diagram illustrates five work roles that perform tasks associated with e-assessment.
Some tasks performed by a trainer and assessor may crossover with those performed by a resources designer and developer. For example, some trainers and assessors may be required to develop simple learning or assessment resources.
Some tasks performed by a resources designer and developer may crossover with those performed by an LMS administrator. For example, some resource developers may upload resources to the LMS.
Some tasks performed by an LMS administrator may crossover with those performed by a training administrator. For example, some LMS administrators may set up courses and classes.
Some tasks performed by a training administrator may crossover with those performed by a training manager or coordinator. For example, some training administrators may have the ability to assign levels of LMS authority to other people.
The above diagram deliberately has a gap between:
Trainer and assessor / LMS administrator
LMS administrator / Training manager or coordinator
Resources designer and developer / Training administrator.
The crossover between these roles is rare.
The role of LMS administrator
It is important to recognise that LMS administrators are specialists and perform tasks that are not usually performed by trainers and assessors. The most critical role in making e-assessment work is the LMS administrator. The knowledge and skills of the LMS administrator are far beyond what is required by users of the LMS, such as trainers and assessors. The Education IRC may like to consider developing a qualification or skill set for the work role of LMS administrator. Some trainers and assessors may like this as a career pathway – but many trainers and assessors would have no desire to deeply immerse themselves in the world of LMS technology.
Users of technology
Why was a new e-assessment unit required for trainers and assessors? Was it because they cannot conduct assessments? Or was it because they cannot use the technology? Using an LMS or other technology that supports the assessment process can be treated in the same ways as using Microsoft Word, Microsoft PowerPoint, Google Docs or other technology applications that may be used by trainers and assessors. Technology to support assessment activities is a foundation skill.
The current TAEASS402 Assess competence unit of competency addresses the knowledge and skills required by trainers and assessors to conduct assessments. Therefore, the ‘real’ need is to develop the capability to use Moodle, Canvas, Job Ready, another LMS, or other technology that supports an RTO’s assessment processes. Addressing every possible technology is onerous, wasteful and impractical.
A dilemma for the VET system is whether a person should learn how to:
use all technologies although the VET graduate may never be required to use all technologies
use one technology although the VET graduate may have to learn a different technology when they get a job.
The VET system recognises the need for further learning after a person gains a qualification and gets a job. In regard to performing the role of trainer or assessor, most technology used to support assessment processes is relatively quick and easy to learn.
There is no need for the creation of a ‘new’ e-assessment unit of competency for the work role of assessor. The solution could be to add a foundation skill or slightly modify the Performance Evidence requirements for the existing TAEASS402 Assess competence unit of competency.
Thinking outside the box
The Education IRC has justified the creation of new e-assessment units of competency because it received feedback from VET sector stakeholders saying there were skill gaps. And the Education IRC has listed the issues that it believes created those identified skill gaps. It is a shame that the Education IRC did not explore solutions to develop e-capability in work roles other than trainers and assessors, because the problems associated with the identified issues are unlikely to be solved by developing the skills of trainers and assessors.
A better solution would be to improve the way RTOs deliver the TAEASS402 Assess competence unit of competency. Improving the way RTOs deliver the TAEASS402 unit, and more generally the TAE40116 Certificate IV in Training and Assessment qualification (or whatever replaces it), would have a greater impact on improving the conduct of ‘e-assessment’ and the delivery of ‘e-learning’.
The Education IRC has focused on the work role of trainer and assessor. This was the wrong focus. Developing capability in work roles other than trainers and assessors would have a greater impact on improving the quality of ‘e-assessment’ and ‘e-learning’. The Education IRC seems to be fixated on developing the skills of trainers and assessors.
The following diagram shows various work roles that have an impact on the quality of ‘e-assessment’ and ‘e-learning’ services.
In regard to e-assessment, I think that the capability development of training managers or coordinators, resource designers and developers, and training administrators should occur before skills development of trainers and assessors.
The employment or development of talented LMS administrators is vital for implementing and maintaining the quality of ‘e-assessment’ and ‘e-learning’ services. The Education IRC does not seem able to think outside the box. The fixation on the work role of trainers and assessors is wrong. The Education IRC, or the VET sector stakeholders that provided feedback, have lacked insight into or understanding of ‘e-assessment’ and ‘e-learning’. Not every problem or issue is going to be rectified by focusing on the skills development of trainers and assessors. My past research into the quality of assessment identified the need to focus on the training manager rather than the assessor. And training is not always the solution.
The following table outlines the existing units of competency for developing e-capability and identifies the current gaps in the TAE Training Package.
The Education IRC should investigate the development of TAE qualifications or skill sets for:
LMS administrator
Training administrator
Training manager or coordinator.
If a TAE qualification or skill set is developed for these work roles, it should include the skills and knowledge required to perform work tasks associated with ‘e-assessment’ and ‘e-learning’.
In conclusion
Some people may think my comments are a bit harsh. However, my comments are a critical analysis of the facts (as per the Case for Change). We have been told that feedback from industry identified a need for two e-assessment units to be created. But we have not been given the evidence that supports the development of these units of competency. And instead of two e-assessment units, one e-assessment unit and one e-learning unit are being created.
How was the need for two e-assessment units identified?
Who in the VET sector identified the need?
What exactly was the identified need?
When was the need identified?
Why was the scope of the ‘E-assessment project’ changed?
Who in the VET sector identified the need for creating a new e-learning unit of competency?
The Case for Change had not been made public until after the AISC had approved it. The scope of the ‘E-assessment project’ has been changed without any justification. If the Education IRC and their SSO are unable to properly develop two units of competency after more than 12 months, then how can they be trusted to review the entire TAE Training Package within a shorter timeframe?
As part of the TAE Training Package review, a new e-learning unit has been drafted. The title of the proposed unit is TAEDEL405 Plan, organise and facilitate e-learning.
PwC’s Skills for Australia is currently seeking our feedback about this draft. The following is my feedback about the unit, and I would be happy to hear your comments before I submit it.
There are different types of e-learning. Often, e-learning is referred to being synchronous and asynchronous. The following table broadly describes two types of e-learning.
The knowledge and skills required for facilitating self-paced e-learning (asynchronous) are different from those required for facilitating group-based e-learning (synchronous). Also, the tasks performed are different. I think it would be best to cover these two types of e-learning in two different units of competency.
Is there a need for a new e-learning unit of competency?
Synchronous and asynchronous e-learning are different tasks, and each type of e-learning requires different knowledge and skills. I recommend having two units instead of one. One unit covering synchronous e-learning, and another covering asynchronous e-learning. Also, delivering synchronous e-learning to an individual is different to delivering synchronous e-learning to a group. Delivery to a group requires a higher level of skills compared with delivery to an individual.
The following is my recommendation. The unit codes and titles should be read as indicative for the purpose of this example.
Note: Learning to use web conferencing technology is relatively quick and easy.
TAEDEL405 Plan, organise and facilitate e-learning (draft)
Is the TAEDEL501 Facilitate e-learning unit of competency being downgraded to become the TAEDEL405 Plan, organise and facilitate e-learning unit of competency?
The TAEDEL405 Plan, organise and facilitate e-learning unit of competency seems to be a rewrite of the current TAEDEL501 Facilitate e-learning unit of competency. Therefore, it appears that the TAEDEL501 unit is being downgraded to become the TAEDEL405 unit. However, this is my assumption. Downgrading or lowering the skill level of the TAEDEL501 unit seems unnecessary and a waste of time and effort.
Feedback about the elements and performance criteria
The following table provides a summary of how the draft TAEDEL405 unit differs from the current TAEDEL501 unit.
Note: The changes to the TAEDEL501 unit are trivial or superficial.
Feedback about the foundation skills
A candidate will be required to demonstrate the ability to complete the tasks outlined in the elements, performance criteria and foundation skills (as per the Performance Evidence statement). This makes the foundation skills assessable.
Analysis and comprehensive feedback about the foundation skills would require much more time and effort. However, I believe that a critical analysis of the foundation skills should be done prior to implementation.
Feedback about the performance evidence
The following is the performance evidence.
I have many questions about the performance evidence:
What is one complete program of learning? What is a session? And what program of learning has only three sessions?
What is a synchronous e-learning session? Is it a training session delivered via a web conferencing platform?
Is a group of four students large enough? How many students normally attend a synchronous e-learning session? Who decides on the size of the group? From my experience, at least eight students would be a more realistic number.
Why are learners being referred to as students? Will this prohibit the unit from being used for community-based training providers, enterprise-based training providers and non-VET training providers who do not have ‘students’?
What is an asynchronous e-learning session? The concept of sessions for asynchronous e-learning seems odd.
Currently, the TAELLN411 unit adequately covers identifying and addressing LLN (and D) needs. This does not have to be integrated into another unit of competency.
I think the scope of the draft TAEDEL405 Plan, organise and facilitate e-learning unit of competency should be split over two or more units. For example, one unit covering the delivery of synchronous learning, and another unit covering the delivery of asynchronous learning. One unit of competency to address the range of e-learning types is too ambitious.
Feedback about the knowledge evidence
The following is the knowledge evidence.
Analysis and comprehensive feedback about the knowledge evidence would require much more time and effort. However, I believe that a critical analysis of the knowledge evidence should be done prior to implementation.
The breadth and depth of knowledge required for this unit seems to be at a higher AQF level. And I would like to know, ‘what is the source document for the code of conduct related to e-learning?’
Feedback about the assessment conditions
The following are the assessment conditions.
I am unsure what is meant by an ‘e-portfolio to reflect progress and collect evidence’. I hope the term ‘e-portfolio’ is removed.
In conclusion
Is there a need for a new e-learning unit of competency?
The draft TAEDEL405 Plan, organise and facilitate e-learning unit of competency duplicates the current TAEDEL501 Facilitate e-learning unit of competency. Therefore, there is no need for this new e-learning unit. The draft TAEDEL405 Plan, organise and facilitate e-learning unit of competency should be scrapped.
Instead of the proposed one new e-learning unit of competency, I would suggest looking at several new e-learning units of competency. For example:
Plan, organise and deliver training to an individual using web conferencing technology
Plan, organise and deliver training to a group using web conferencing technology
Plan, organise and facilitate online and self-paced learning for an individual
Plan, organise and facilitate online and self-paced learning for a group
If the TAEDEL405 unit is implemented, will the TAEDEL501 unit be removed from the TAE Training Package? Is the aim to downgrade the skill level of the current TAEDEL501 unit? (So many unanswered questions.)
As part of the TAE Training Package review, a new e-assessment unit has been drafted. The title of the proposed unit is TAEASS404 Conduct e-assessment of competence.
PwC’s Skills for Australia is currently seeking our feedback about this draft. The following is my feedback about the unit, and I would be happy to hear your comments before I submit it.
E-assessment may mean different things to different people. I have categorised two different types of e-assessment:
Assessment conducted by technology
Assessment supported by technology
Assessment conducted by technology
Assessment conducted by technology may also be known as computer-based assessment (CBA) or e-marking. This type of e-assessment is assessment which is both delivered and marked by computer. And the computer may provide feedback automatically to students without the involvement of an assessor (or any other person).
Benefits of conducting assessment using technology
Benefits of using technology to conduct assessments include:
consistency of assessment decisions
support for the record keeping of evidence and results
speed in completing assessments and providing feedback
reduced workload for assessors.
Limitations of conducting assessment using technology
Assessment conducted by technology can be used for assessing knowledge, but it is not an appropriate method for assessing performance, such as:
bake a cake
repair a lawn mower
care for an elderly person
unblock a toilet
shear sheep.
The Australian VET system requires evidence of a person’s ability to perform work tasks and activities. Therefore, this type of e-assessment should only be used to assess knowledge, and in some cases, the breadth and depth of knowledge required cannot be covered by this application of technology.
This method of computer-based assessment requires the design and development of valid test questions with answers (and sometimes responses to be sent as automatic feedback). Not every trainer will have the ability to develop effective test questions. Usually, the writing of test questions would be the responsibility of an instructional designer or resources developer.
What skills are needed to develop computer-based assessment?
An instructional designer or resources developer needs the following skills to develop computer-based assessment:
Write questions
Write answers
Write feedback
Upload content to the computer-based assessment instrument.
The value of computer-based assessment is that there is no involvement of an assessor during the assessment process. The computer-based assessment instrument performs the assessment and records the result.
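To illustrate what ‘delivered and marked by computer’ means in practice, the following is a minimal sketch of an auto-marked knowledge quiz. The question format, function names, and sample question are my own illustration and are not taken from any particular LMS or computer-based assessment product.

```python
# Minimal sketch of computer-based assessment (CBA): questions are
# delivered, marked, and feedback returned without an assessor involved.
# The data structure and functions below are illustrative only.

QUESTIONS = [
    {
        "prompt": "Which of the following is one of the rules of evidence?",
        "options": ["Sufficiency", "Fairness", "Flexibility"],
        "correct": "Sufficiency",
        "feedback": "Revisit the rules of evidence: validity, sufficiency, "
                    "authenticity and currency.",
    },
]

def mark_response(question, response):
    """Mark a single response; return (is_correct, feedback_text)."""
    is_correct = response == question["correct"]
    feedback = "Correct." if is_correct else question["feedback"]
    return is_correct, feedback

def mark_quiz(questions, responses):
    """Mark a whole quiz; return the score and per-question feedback."""
    results = [mark_response(q, r) for q, r in zip(questions, responses)]
    score = sum(ok for ok, _ in results)
    return score, [fb for _, fb in results]
```

The point of the sketch is that the instrument, not a person, records the result and sends the feedback; writing the questions, answers and feedback remains a human design task.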
Some trainers and assessors may be involved in the development of questions, answers and feedback for computer-based assessment. However, this would be outside the scope of normal activities for most trainers and assessors.
Assessment supported by technology
It has become common for RTOs to use technology to support the assessment process. Many RTOs have implemented a learning management system (LMS), such as Moodle or Canvas. It is more than 5 years since I worked for an RTO that wasn’t using an LMS for students to submit their assessments.
What does the ‘e’ in e-assessment stand for? Does it stand for ‘electronic’ or ‘everyday’? Assessment supported by technology has become the norm. The distinction between ‘e-assessment’ and ‘assessment’ has dissolved over the past 5 years. It would be rare for the assessment process not to be supported by technology.
The Education IRC believes that ‘e-assessment’ is different from ‘assessment’. How did the Education IRC identify a need to create a ‘new’ e-assessment unit of competency? Most RTOs use technology for students to submit their evidence and to support the assessment process conducted by assessors. A very small number of exceptions may include:
VET in School programs that use print-based workbooks
Creative arts qualifications, such as, music or dance
Small RTOs that have a paper-based system.
Is it difficult to learn how to use an LMS?
No, it is not difficult to learn how to use an LMS. So far, I have had to learn three different LMS platforms. The first LMS I learnt was Moodle. I attended a 3-hour session on how to use it to perform my role as an assessor. This was enough to get me using Moodle to conduct assessments. Later, I started work for an RTO that used Canvas. About 30 minutes of my induction was taken to show me how to log on, navigate, and use this LMS. And later again, another RTO emailed me a 5-page document showing how to log on, navigate, and use Job Ready for conducting assessments.
Learning an LMS is not difficult. The knowledge and skills required to use an LMS are too insignificant to warrant the creation of a unit of competency. It is possible that using an LMS or other technologies could be identified as a foundation skill for the TAEASS402 Assess competence unit of competency. However, learning one LMS does not negate the need for a newly qualified assessor to learn a different LMS once they gain employment.
What skills are needed to conduct assessments that are supported by technology?
The skills required by an assessor to conduct assessments that are supported by technology include:
Prepare for assessment
Brief the candidate
Gather evidence
Support the candidate
Make the assessment decision
Record and report the assessment decision
These skills are the same as what is covered by the current TAEASS402 Assess competence unit of competency.
TAEASS404 Conduct e-assessment of competence (draft)
Feedback about elements and performance criteria
The following are the performance criteria for the first element.
Answers to questions raised
The Training Package Developer has raised some questions. The following are my answers to the three questions raised.
The following are the performance criteria for elements 2, 3 and 4.
There is very little difference between the draft TAEASS404 Conduct e-assessment of competence unit of competency and the current TAEASS402 Assess competence unit of competency.
What happens if we remove duplicate performance criteria from the TAEASS404 Conduct e-assessment of competence unit of competency? After removing all the performance criteria that duplicate what is already in the TAEASS402 Assess competence unit of competency, we are left with five performance criteria. The following table lists the five performance criteria.
Contrary to the Case for Change, there is very little difference between conducting ‘traditional assessment’ (as described by the TAEASS402 unit) and conducting ‘e-assessment’ (as described by the TAEASS404 unit). The TAEASS404 Conduct e-assessment of competence unit of competency is basically an attempt to contextualise the TAEASS402 Assess competence unit of competency. The minor differences do not warrant the development and implementation of a new e-assessment unit of competency.
Feedback about foundation skills
A candidate will be required to demonstrate the ability to complete the tasks outlined in the elements, performance criteria and foundation skills (as per the Performance Evidence statement). This makes the foundation skills assessable.
The following are the foundation skills.
Analysis and comprehensive feedback about the foundation skills would require much more time and effort. However, some of the foundation skills are poorly described, or do not describe a skill at all. For example:
Feedback about performance evidence
The following is the performance evidence.
The performance evidence is poorly written. I am an experienced assessor and for many years I have been conducting e-assessment, yet I find the performance evidence complicated, confusing, and difficult to comprehend. For example:
How is an e-assessment conducted for a group of students? When does this occur?
What is the size of the group of students? How is an individual’s competence assessed in a group?
What is the definition for an assessment task? Assessing one assessment task is far short of assessing competence.
Will sending an email be appropriate for communicating with candidates electronically?
What does ‘complete feedback for students in relation to their e-assessment completion’ mean?
Why is a distinction being made between an ‘e-assessment tool’ and an ‘assessment tool’? Why are the words ‘student’ and ‘candidate’ used interchangeably? The performance evidence is poorly written and needs to be significantly reworked.
Feedback about knowledge evidence
Analysis and comprehensive feedback about the knowledge evidence would require much more time and effort. However, I believe that a critical analysis of the knowledge evidence should be done prior to implementation.
Feedback about assessment conditions
The following are the assessment conditions.
Why is the term ‘digital assessment tool’ being used? And why is the term ‘e-portfolio’ being used?
I think the entire Assessment Requirements need to be thoroughly reworked. But above all, I think the draft TAEASS404 Conduct e-assessment of competence unit of competency should be scrapped.
In conclusion
Is there a need for a new e-assessment unit of competency? Can the current TAEASS402 Assess competence unit of competency adequately cover the requirements of conducting e-assessment?
Most assessors are using an LMS or other technology when they conduct assessments. Using an LMS or other technology has become fundamental or foundational to conducting assessments. The distinction between conducting ‘e-assessment’ or conducting ‘non-e-assessment’ is blurred. And the knowledge and skills required by assessors to conduct ‘e-assessment’ or ‘non-e-assessment’ are the same.
The current TAEASS402 Assess competence unit of competency adequately covers the requirements of conducting assessments that are supported by technology (e-assessment). At best, the draft TAEASS404 Conduct e-assessment of competence unit of competency is an attempt to contextualise the TAEASS402 Assess competence unit of competency. The minor differences do not warrant the development and implementation of a new e-assessment unit of competency.
We do not need a new e-assessment unit of competency.
The assessment tool will always have at least two assessment tasks; one assessment task to gather the knowledge evidence, and another to gather the performance evidence.
The volume or frequency of performance evidence may be used to determine the number of assessment tasks required to gather the required performance evidence. See ‘Golden rule number 4’.
All specified knowledge evidence must be gathered.
A simple assessment strategy is to write at least one question for each item of knowledge evidence specified. Consider each bullet point listed under the heading of Knowledge Evidence as an item of knowledge evidence to be gathered.
Sometimes a bullet point may have sub-bullet points. Each sub-bullet point may require its own question, or it might be possible to use one question to gather evidence that would cover all the sub-bullet points.
The following is a simple assessment strategy that can keep the gathering of performance evidence as simple as possible for assessors to implement and for candidates to understand.
Use the same number of assessment tasks to gather performance evidence as the volume or frequency of performance evidence specified. For example, if the performance evidence specifies that the task must be performed on three occasions, then plan for three assessment tasks to gather the specified quantity of evidence. In this example, the same assessment instrument may be able to be used on each of the three occasions (but this may not always be appropriate).
Evidence must be gathered for all specified performance evidence, and this includes gathering evidence for each performance criterion. Some Training Packages specify that evidence for the foundation skills is also to be gathered; in that case, the foundation skills become assessable items.
Avoid integrating the gathering of knowledge evidence while gathering performance evidence because this will quickly complicate the assessment task. It can also interrupt the flow of performing a task if the assessor stops the candidate mid-task and starts asking them questions about what they are doing.
The following diagram shows the common assessment methods (and in brackets the evidence to be gathered by the assessment instruments).
Keep the assessment task to gather knowledge evidence separate from the assessment task or tasks used to gather performance evidence.
Note: It is a good idea to gather the knowledge evidence before gathering the performance evidence. If a candidate has insufficient knowledge, they are likely to have difficulties performing the work tasks or activities. It may be best to delay the gathering of performance evidence until the candidate has gained sufficient knowledge.
Do not ask ‘how-to’ questions to gather performance evidence. Performance evidence will require a candidate to perform the task or tasks.
Comply with the specified assessment conditions. This may include location, facilities, equipment or resources required for assessment.
Always trial and review the assessment tool before implementing it.
The following are some check points when trialling and reviewing the assessment tool:
Instructions to assessor are clear and written in plain English
Instructions to candidates are clear and written in plain English
If applicable, instructions to third parties are clear and written in plain English
Headings, sub-headings, page layout and formatting, page breaks, and white space have been used to make assessment documents easy to read and navigate
Assessment instruments can be used for collecting evidence and making judgements, including space provided for results, comments and feedback
Assessment decision-making criteria are provided, for example:
Sample answers for knowledge questions
Criteria for determining the standard of performance
Assessment documents are free of typos and grammatical errors
Assessment task titles and numbering are consistent across all documents.
The assessment tool should never be implemented before it has been trialled or piloted. This is when we find out if the assessment tool works, or not.
Some rules, like ‘Golden rule number 1’ should never be broken.
And sometimes you may need to break some of the rules.
The guiding principle should always be about making the assessment tool as simple as possible for assessors to implement and for candidates to understand.
Do you need help with your TAE studies?
Are you doing the TAE40122 Certificate IV in Training and Assessment, and are you struggling with your studies? Do you want help with your TAE studies?
Do you want more information? Ring Alan Maguire on 0493 065 396 to discuss.