An abridged history of changes to units of competency

The current Australian VET system was implemented in 1992. Since then, there have been many changes. One of those changes is how units of competency are documented. The initial units of competency looked very different from today's units of competency.

The first training packages in the Australian VET system were endorsed in 1997. This began the standardisation of units of competency across the different industry sectors. It should be noted that ‘standardisation’ has never resulted in all units of competency looking the same. There have always been some variations.

In 2012, a ‘new’ format for units of competency was introduced. The Standards for Training Packages specified this new format.

The changes specified by the Standards for Training Packages included:

  • Inclusion of Foundation Skills
  • Separation of the Unit of Competency and Assessment Requirements (to separate documents).

It took ten years before all units of competency complied with the 2012 Standards for Training Packages.

And this year, effective from 1 July 2025, there will be another ‘new’ format for units of competency. However, adding to the complexity of the Australian VET system, there will be two new formats for units of competency, rather than one. One of the new formats is very similar to the current format, but the other looks more like a description of curriculum than a description of competency. This turns back the clock to before 1992, because it was in 1992 that competency-based training and assessment was introduced to replace the failing curriculum-based system.

These changes are specified by the Training Package Organising Framework.

An example: changing units of competency

The ability to make presentations has not significantly changed over the past decade. However, the relevant unit of competency was updated in 2015, and updated again in 2020. Why did we need to change from the 2015 BSBCMM401 Make a presentation unit to the 2020 BSBCMM411 Make presentations unit? Let’s quickly compare these two units.

The first difference is that the BSBCMM401 Make a presentation unit is singular, and the BSBCMM411 Make presentations unit is plural. Singular refers to making one presentation, while plural refers to making more than one presentation. It could be argued that if you can competently make a presentation, you would have the ability to make another presentation.

The second difference, which follows on from the first, relates to the performance evidence. The performance evidence for the BSBCMM411 Make presentations unit specifies the delivery of two presentations, while the performance evidence for the BSBCMM401 Make a presentation unit specified that at least one presentation was delivered. This change is underwhelming.

The following table compares the elements and performance criteria for the two units.

The above shows that there are three differences:

  • Performance criterion 1.4 for the BSBCMM401 unit has been removed
  • Rewording has reduced the size of some performance criteria, and in some cases, this makes the performance criteria easier to read
  • The number of performance criteria for Element 2 has been reduced from six to three.

This last point about a reduced number of performance criteria is deceptive, because two of the three performance criteria that have been removed from Element 2 are covered by the Foundation Skills for the BSBCMM411 unit. Overall, the change from BSBCMM401 to BSBCMM411 made a slight improvement. It is debatable whether the change was necessary.

Déjà vu: changing units of competency

People who are new to the Australian VET system may not experience it, but many who have been around for a while may experience déjà vu over the current and future changes to units of competency. One document became two documents, and now the two documents have become one document again.

Before 2012, a unit of competency was one document with two parts:

  • Unit of Competency
  • Evidence Guide.

After 2012, we introduced a ‘new’ format for units of competency consisting of two documents:

  • Unit of Competency
  • Assessment Requirements (replaced the Evidence Guide).

In April 2025, the training.gov.au website combined the two documents into one document again. This document has two parts:

  • Unit of Competency
  • Assessment Requirements.

The following illustrates how the make presentations unit has changed over the past decade, from one document to two documents, and back to being one document.

This recent change to a ‘one document format’ is consistent with the format for units of competency specified by the Training Package Organising Framework.

In conclusion

It seems that much effort goes into making changes, and the implementation of every change costs money and consumes valuable resources. Units of competency changing from one document to two documents, and back to one document may be considered trivial.

But be aware: the change from competency-based training and assessment to curriculum-based training and assessment is significant, especially if the providers of training and assessment begin to determine the curriculum, rather than industry and employers determining the competencies.

We seem to be returning the Australian VET system to before 1992. It wasn't great then, and it won't be great for our future.

Using AI is not learning

Introduction

As a trainer and assessor, I have been delivering the Certificate IV in Training and Assessment qualification since it was released in 2004. Over the past two decades I have seen many changes. A new phenomenon has recently appeared.

Over the past two years, the answers to knowledge questions submitted for assessment have significantly improved. Two years ago, I would have seen many poorly written answers with spelling and grammatical errors. Last year, there was a noticeable improvement, with far fewer spelling and grammatical errors. This year, most answers to knowledge questions are very well written.

Usually, at least half of the participants attending my Certificate IV in Training and Assessment courses have English as their second language. And I have come to expect spelling and grammatical errors. But things have changed. Miraculously, I am now assessing written answers to questions that seem to be too good to believe.

Also, I am seeing many more people spelling words using American English rather than Australian English. I am seeing the letter ‘z’ far too often.

What has happened?

Over the past two years there has been a substantial uptake in people using AI. Like many people, I too use AI often. And like many people, I find it to be useful.

As a trainer, I tell my participants that AI may be useful. However, I ask them not to use AI when answering their knowledge questions. I tell them that there are five ways I can tell if a response has been generated by AI:

  • Consistency: AI responses are often highly consistent in tone, style, and factual accuracy, making them seem almost too perfect.
  • Pattern Recognition: Look for repetitive phrases, unnatural sentence structures, or overreliance on certain keywords.
  • American English Bias: AI may favor American English, using “z” instead of “s” in words like “analyze” or “realize.”
  • Numbered Lists: AI often generates numbered lists, even when they are not explicitly requested.
  • Key Phrase Followed by Colon: Pay attention to responses that frequently use a key phrase followed by a colon, followed by additional information. This is a common pattern in AI-generated text.

By the way, I used AI to generate the above list.

People are using AI

I am assuming that many participants studying for a vocational education and training (VET) qualification are using AI. And I will assume that the number of participants using AI will grow. It is likely that some participants will be tempted to use AI to help them answer their knowledge questions.

Some participants make it easy to identify when an answer has been generated by AI. I see answers with the following characteristics:

  • Key phrase followed by a colon: responses that use a key phrase followed by a colon, then additional information
  • Over-capitalisation (using too many capital letters)
  • The letter ‘z’.

Grammarly is AI

Recently, I asked one of my participants if they were using AI to answer the knowledge questions. They told me that they were not. As I showed the participant why I had asked, they told me that they use Grammarly. Luckily, I knew that Grammarly is AI, so I was able to inform them that the application was likely doing more than just correcting spelling and grammatical errors. The participant agreed and said that they would immediately remove the application.

The following is a snippet from the Grammarly homepage.

Grammarly will write text, not just correct spelling and grammar. The same thing is likely to be happening for people with English as their second language when they are using translation apps. I’m not sure, but if you know, I would be happy to hear from you.

Using AI to investigate the use of AI

Many answers to knowledge questions are looking too perfect to have been written by a human. But how do I know if an answer has been generated by AI? I provided the answer from one of my participants and asked AI whether it had been written by AI. Here is AI's response.

AI tells me that it is highly probable that the text was generated by AI.

This backs up my hunch that the participant’s answer to the knowledge question was likely to have been generated by AI. And I have a hunch that many participants are using AI to write answers to their knowledge questions.

AI is getting better

Two years ago, even a year ago, I would have received many more incorrect answers from AI. It is continuously getting better, and because it is connected to the internet, AI-generated responses can be astonishingly accurate. Here are examples of when I have asked AI to answer two different knowledge questions.

Example 1

I did not provide the table. AI generated it.

Example 2

There was no need to go to the website. AI provided the link.

AI can give wrong answers

Although AI is getting better, it can still give incorrect answers.

Here is an answer to a knowledge question submitted by a participant.

The correct answer that I’m looking for is, ‘JSA stands for Jobs and Skills Australia’.

I asked AI, ‘what does JSA stand for’, and the following is what I got.

This tells me that the participant probably got their answer from AI. As an assessor, it is good that AI is still providing some incorrect answers.

In conclusion

Participants studying for VET qualifications are using AI. On one hand, we encourage our participants to use AI to help them perform their work. On the other hand, we tell our participants not to use AI to answer their knowledge questions.

Regardless of what we say, some participants are using AI to answer their knowledge questions. Their answers may have the following characteristics:

  • Answers that are very well written without spelling and grammatical errors
  • Answers that are in a format that looks AI-generated
  • Answers with the letter ‘z’
  • Answers that are obviously incorrect.

I believe that many participants will use AI. And I believe that many participants will not use AI as a tool to help them learn something. Instead, it is only being used to blindly answer questions – no thinking involved.

Using AI is not learning.

It would be good to hear what you think about this topic.

Please contact me, Alan Maguire, on 0493 065 396 if you need to learn how to legitimately use AI as a trainer and assessor, or how to legitimately use AI while studying for the TAE40122 Certificate IV in Training and Assessment qualification.

Do you need help with your TAE studies?

Are you doing the TAE40122 Certificate IV in Training and Assessment, and are you struggling with your studies? Do you want help with your TAE studies?

Ring Alan Maguire on 0493 065 396 to discuss.

Contact now!


Training trainers since 1986

Unpacking units of competency

Students in the TAE40122 Certificate IV in Training and Assessment course will unpack at least two units of competency before they develop training programs for them. Usually, unpacking a unit of competency requires:

  • Interpretation
  • Contextualisation
  • Reconstruction.

Annotating the unit

By actively annotating the unit of competency and assessment requirements, you unpack the unit, effectively reading and analysing it to determine the content for a competency-based training program.

Competency-based training is training based on the competency, and competency is described by the unit of competency and assessment requirements.

Unpacking the unit of competency will identify:

  • Task or tasks to be performed
  • Knowledge required to perform the task or tasks
  • Skills required to perform the task or tasks.

The annotated unit is a disposable document

The annotated unit of competency and assessment requirements is a disposable document. It can be disposed of after the training program has been developed. And it does not matter if anyone else can read your annotations. It is your document for you to identify and understand the content of the training program.

The process of unpacking a unit is what's important, not the document resulting from the ‘unpacking process’.

Annotation techniques

Unpacking can be done using:

  • Pen on paper
  • On the computer using the Word version of the complete unit of competency and assessment requirements.

The annotation techniques that can be used include:

  • Write text
  • Use colours
  • Use highlighting
  • Use arrows, circles or other shapes
  • Give numbers to foundation skills (FS), performance evidence (PE), and knowledge evidence (KE)
  • Make connections.

Every unit of competency is unique

There are different types of units:

  • Some units are procedural and clearly describe the performance of a task
  • Some units are procedural and relate to the performance of more than one task
  • Some units are not procedural but do relate to the performance of one or more tasks
  • Some units relate to learning a skill and do not directly relate to performance of a task
  • Some units relate to learning knowledge and do not directly relate to the performance of a task
  • Some units are vague and attempt to describe interpersonal and behavioural traits.

We should expect all units to be unique. And we should expect all units to be ambiguous, and this will require us to interpret and contextualise the unit.

Unfortunately, we can expect some units to have been badly written.

This article has been published as an introduction to unpacking a unit of competency. Additional information will be published and linked to this article.

Please contact me, Alan Maguire on 0493 065 396, if you need to learn how to unpack a unit of competency.


Risk-based approach: How to determine sample size for assessment validation

Introduction

The Standards for Registered Training Organisations (RTOs) 2015 required an RTO to review a statistically valid sample of its assessments. The national VET regulator, the Australian Skills Quality Authority (ASQA), provided an online calculator to determine a sample size that would be statistically valid.

ASQA’s Validation sample size calculator has been used to calculate the statistically valid sample size for the following two examples. [1]

Example 1

Example 2
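The two examples above were produced with ASQA's Validation sample size calculator. For readers curious about the arithmetic, a widely used statistical approach is Cochran's formula with a finite population correction. Note that this is a sketch of a standard method only; ASQA has not published the calculator's internals, and the calculator may use different parameters.

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample size formula with finite population correction.

    z      -- z-score for the confidence level (1.96 = 95% confidence)
    margin -- acceptable margin of error (0.05 = 5%)
    p      -- assumed proportion (0.5 is the most conservative choice)

    This is a standard statistical formula offered as an illustration;
    it is not claimed to reproduce ASQA's calculator exactly.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)          # finite population correction
    return math.ceil(n)
```

With these parameters (95% confidence, 5% margin of error), a cohort of 100 completed assessments yields a sample of 80, and a cohort of 1,000 yields 278, illustrating how the required sample grows sub-linearly with cohort size.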

The new Standards for RTOs 2025 has introduced a significant change to assessment validation. Instead of a fixed requirement, RTOs are now required to adopt a risk-based approach to determine their validation sample size. This means the number of assessments validated will vary considerably across RTOs, reflecting their individual risk assessments.

Select the units to be validated

The new Standards for RTOs 2025 states that “every training product on the organisation’s scope of registration is validated at least once every five years and on a more frequent basis where the organisation becomes aware of risks to training outcomes, any changes to the training product, or receives relevant feedback from VET students, trainers, assessors, and industry.” [2]

What is a training product?

The new Standards for RTOs 2025 defines training products as:

  • VET Qualification
  • Skill set
  • Unit of competency
  • Accredited short course or module.

How many units per qualification should be validated?

ASQA has provided the following guidance for RTOs: [3]

“At least two units from each qualification must be validated; however, your RTO may choose to validate more if validation of the two units identifies risks or a potential harm to learners who may not have met the required assessment outcomes, inconsistent assessment judgements have been made by assessors or assessment has not been conducted in accordance with the Principles of Assessment or the Rules of Evidence.”

Prioritising high-risk units

When RTOs prioritise the validation of high-risk units over low-risk ones, they are strategically focusing their quality assurance efforts where they matter most. High-risk units often involve complex skills, critical safety implications, or significant industry impact. By concentrating validation on these areas, RTOs can identify and rectify potential assessment flaws that could lead to serious consequences, such as workplace accidents or compromised professional standards. This approach ensures that training quality is rigorously maintained in the most crucial areas, safeguarding both learner outcomes and industry integrity. Essentially, it’s about maximising the impact of validation resources by addressing the areas with the greatest potential for negative consequences.

Identifying risks

The new Standards for RTOs 2025 states that a risk-based approach should be used to determine the sample size of assessments that should be validated. It’s important to understand that the risk-based approach in the Australian VET sector is about ensuring quality and compliance. Therefore, the risks considered relate to factors that could negatively impact those outcomes. Here are five risks that RTOs could consider when determining assessment validation sample sizes:

  • Type of unit
  • Experience of assessors
  • Changes to assessment practices
  • Volume of assessments
  • Historical compliance and validation outcomes.

Risk 1. Type of unit

Units involving high-risk activities, complex skills, or critical safety components require more rigorous validation. The potential consequences of incompetent performance are higher.

Risk 2. Experience of assessors

If assessors are new, less experienced, or are not fully qualified, there is a higher risk of inconsistent or inaccurate assessments. This necessitates a larger validation sample.

Risk 3. Changes to assessment practices

Any recent changes to assessment tools or assessment procedures can introduce inconsistencies. A larger validation sample size helps identify any unforeseen issues.

Risk 4. Volume of assessments

A high volume of assessments within a short period can increase the risk of errors or inconsistencies. Larger sample sizes are needed to maintain quality assurance.

Risk 5. Historical compliance and validation outcomes

A history of non-compliance or poor validation outcomes should lead to a more conservative approach with larger sample sizes. This allows for closer scrutiny and helps build confidence in the RTO’s assessment practices.

The above five risks are examples, not a complete list, of risks that may influence an RTO’s risk assessment. In essence, the risk-based approach should encourage an RTO to prioritise validation efforts where the potential for errors or negative impacts is greatest.
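As an illustrative sketch of how these risk factors might feed into a sample size, a qualitative risk rating could be mapped to a sampling rate. The percentages below are hypothetical; the Standards do not prescribe any rates, and each RTO sets its own in its validation policy.

```python
import math

# Hypothetical sampling rates per risk level. An RTO would set its own
# rates in its validation policy; these numbers are illustrative only.
RISK_SAMPLE_RATES = {"high": 0.50, "medium": 0.25, "low": 0.10}

def validation_sample_size(completed_assessments, risk_level):
    """Number of completed assessments to validate for one unit, given a
    qualitative risk rating. A sketch, not a prescribed method."""
    rate = RISK_SAMPLE_RATES[risk_level]
    # Always validate at least one assessment, even for tiny cohorts.
    return max(1, math.ceil(completed_assessments * rate))
```

For example, a high-risk unit with 100 completed assessments would have 50 validated, while a low-risk unit with the same volume would have 10.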

Determining sample size

Let’s look at how a risk-based approach to assessment validation sample sizes might work with some numerical examples. Here are three scenarios.

Scenario 1. High-risk unit

Scenario 2. Medium-risk unit

Scenario 3. Low-risk unit

The numbers in the above three scenarios are examples. The exact percentages will vary depending on the RTO’s own risk assessment and validation policies.

The following table compares the statistically valid sample size with the sample size for the three previous scenarios.

High-risk units should be selected for validation rather than low-risk units. Therefore, the new risk-based approach should not significantly reduce the sample size of assessments to be validated.

Selecting units to be validated

A VET qualification consists of many units of competency. The RTO will need to select at least two units to be validated. The following three-step process can be used for risk-based selection of units.

  • Step 1. Select the risk assessment criteria
  • Step 2. Create a risk assessment table
  • Step 3. Conduct and document the risk assessment.
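The three steps above can be sketched as a simple scoring exercise: choose criteria, rate each unit against them, then rank the units and select at least two for validation. The ratings below are hypothetical example data, not real assessments of these units.

```python
# Step 1: risk assessment criteria (examples drawn from this article).
criteria = ["complex skills", "high-risk activities",
            "new or inexperienced assessor", "new or changed assessment tools"]

# Step 2: risk assessment table -- 1 means the criterion applies to the
# unit, 0 means it does not. Hypothetical ratings for illustration only.
units = {
    "BSBSUS211": [0, 0, 0, 1],   # newly implemented assessment tools
    "BSBTEC201": [0, 0, 1, 0],   # assessed by a new assessor
    "BSBWHS211": [1, 1, 0, 0],   # safety-critical, high-risk activities
}

# Step 3: rank units by total risk score and select at least two.
ranked = sorted(units, key=lambda u: sum(units[u]), reverse=True)
selected = ranked[:2]
print(selected)
```

In this example the safety-related unit scores highest and is selected first, which matches the principle of prioritising high-risk units over low-risk ones.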

Step 1. Select the risk assessment criteria

Here are some examples of risk assessment criteria:

  • Complex skills
  • High-risk activities
  • New, inexperienced or partly qualified assessors
  • New or changed assessment tools
  • Feedback or complaints from students, trainers, assessors, or industry.

Step 2. Create a risk assessment table

The following risk assessment table shows an example with four risk assessment criteria. The number of risk assessment criteria will be determined by the RTO, and this will determine the number of columns required.

Step 3. Conduct and document the risk assessment

Here are risk assessment examples for two different qualifications.

Example 1

Selection of units to be validated based on the above risk assessment table should consider:

  • Units with newly implemented assessment tools (for example, BSBSUS211 Participate in sustainable work practices)
  • Units assessed by new assessors (for example, BSBTEC201 Use business software applications)
  • Units related to critical areas like safety (for example, BSBWHS211 Contribute to the health and safety of self and others).

Example 2

Unit selection for validation based on the above risk assessment table may prioritise two of the following:

  • SITHFAB025 Prepare and serve espresso coffee
  • SITHACS009 Clean premises and equipment
  • SITXFSA005 Use hygienic practices for food safety
  • SITXWHS005 Participate in safe work practices.

What assessment items must be kept? And how long do these items need to be kept?

ASQA has provided the following guidance for RTOs: [4]

“An RTO must keep all completed assessment items for each student for a period of six months from the date on which the judgement of competence for the student has been made. Completed student assessment items include the actual work completed by a student or evidence of that work, including evidence collected for a Recognition of Prior Learning (RPL) process.

If a student’s actual work is unable to be retained, an assessor’s completed marking guide, criteria, and observation checklist for each student may be sufficient. However, this evidence must have enough detail to demonstrate the assessor’s judgement of the student’s performance.”

Assessment items must be kept for at least six months. Some state and territory governments may require RTOs delivering government-funded or subsidised training to keep assessment items for a longer period.

Therefore, completed assessment items should be available for conducting assessment validation.

Random selection of assessments

While random selection is a common approach to assessment validation, best practice dictates including assessments conducted by new, inexperienced, or partially qualified assessors. Additionally, a sample of any Recognition of Prior Learning (RPL) assessments should always be included in the validation process.

In conclusion

The Standards for RTOs 2025 replace the previous fixed statistically valid sample size requirements with a risk-based approach. RTOs must now determine their own sample size based on their risk assessment.

Apart from determining the validation sample size, the RTO must select the units to be validated. An RTO should select units that are high risk rather than low risk. Prioritising high-risk units for validation allows RTOs to focus quality assurance where it’s most critical. By concentrating on complex skills and high-impact areas, RTOs can ensure assessment quality is maintained and mitigate potential serious consequences.

References

[1] https://www.asqa.gov.au/resources/tools/validation-sample-size-calculator accessed 15 March 2025

[2] Standard 1.5 (2) (b) https://www.legislation.gov.au/F2025L00354/asmade/text accessed 15 March 2025

[3] https://www.asqa.gov.au/faqs/how-many-units-qualification-should-be-validated accessed 15 March 2025

[4] https://www.asqa.gov.au/faqs/what-student-assessment-items-do-i-need-keep-and-how-long-do-i-need-keep-them accessed 15 March 2025


Update from TAE40116 to TAE40122: Shop around for an RTO

During November 2024, I will be presenting a webinar titled ‘An RPL guide for updating from TAE40116 to TAE40122’. The webinar will cover a 5-step process.

This is the third article in a series about updating from the TAE40116 qualification to the TAE40122 qualification.

Step 3. Shop around for an RTO

An RTO can determine its own RPL process and associated fees. It is a good idea to contact a few RTOs and gather information about:

  • Cost of the RPL
  • Support provided
  • Flexibility
  • Cost of gap training.

Cost of the RPL

The fee charged by an RTO for RPL assessment can vary. I just did a quick internet search and found three different RTOs offering RPL at $1,300, $2,100 and $2,400. You may like to check whether the RTO you work for is willing to pay for or subsidise your RPL.

Support provided

Cost is one criterion. Other criteria, such as the support provided by the RTO, may be important to you.

  • Does the RTO appear to be friendly and supportive?
  • What support will the RTO provide during the RPL process?
  • How much support will you need?

Another important criterion is the RTO's willingness to be flexible.

Flexibility

Flexibility is one of the four principles of assessment. Is the RTO willing to be flexible?

  • Can you select the elective units that you want?
  • Is the RTO willing to recognise parts of a superseded and non-equivalent unit as substantive RPL evidence (therefore, no need to repeat training or assessment for those parts of the unit)?
  • Is the RTO willing to adapt or modify their assessment documents used to gather RPL evidence?

Cost of gap training

You may need to do some training to close any gaps. Some people may have a small number of units that cannot be attained by credit transfer and RPL. Each person applying for RPL will have their own unique circumstances.

  • How many units may be potential gaps?
  • How much would it cost to do gap training?

Compare RTOs

Shopping around for an RTO could save you money. And you could save time associated with the RPL application and assessment process. The following is a table that can be used to help you gather and compare information about different RTOs.

More tips and strategies will be presented at the upcoming webinar.

Go to the webinar webpage for further details.

Please contact Alan Maguire on 0493 065 396 if you would like more details or if you would like to discuss.
