AI agents in VET: A shortcut to non-compliance?

Introduction

After recently reviewing a suite of VET training and assessment materials purchased from a well-known commercial supplier, I published an article titled ‘Human versus AI: The future of assessment design’.

The resources I had reviewed were disappointingly unfit for purpose. I identified several critical issues, including:

  • Overly complex numbering and an excessive amount of fragmented documents made navigation difficult.
  • The content was cluttered with unnecessary instructions and jargon that is neither learner-friendly nor used in actual workplaces.
  • The training and assessment materials lacked detail and read like generic templates rather than materials tailored to the Unit of Competency.

The overall quality was bland and disconnected. This is highly characteristic of AI-generated content. I later confirmed that this supplier is a ‘leading’ user of AI agents to produce their materials.

This follow-on article is a warning to anyone who uses, or is considering using, an AI agent to develop training and assessment materials. It is also a warning to RTOs intending to purchase training and assessment materials that have been produced by an AI agent.

I am not against using AI. I design and develop training and assessment materials, and I use an AI chatbot to assist me.

Let’s first look at the difference between an AI chatbot, AI assistant, and AI agent.

What is the difference between an AI chatbot, AI assistant, and AI agent?

An AI chatbot describes the ‘chat’ format or interface with AI. An AI assistant describes the overall role of helping the user. And an AI agent describes an AI that can act autonomously.

In the Australian VET system, the distinction between these three tools is defined by their degree of autonomy and their integration into an RTO’s compliance workflow.

Here is one specific example of how an instructional designer might use each of the three AI applications.

AI chatbot: The conversational researcher

When unpacking a new unit of competency, a chatbot acts as a reactive sounding board. You manually copy technical jargon or Performance Criteria into a separate window to request plain-English explanations or workplace scenarios. It requires a constant back-and-forth exchange, where the AI only knows what you explicitly provide in the chat. This manual ‘copy-paste’ workflow makes it a useful external tool for brainstorming and simplifying complex training requirements.

AI assistant: The integrated co-writer

As you draft learner guides or assessment tools within your word processor, an AI assistant works alongside you in real-time. Because it is context-aware, it ‘sees’ your active document, allowing it to suggest knowledge checks or generate marking rubrics based on your specific text. You can refine your tone or create content without switching windows. This integrated approach streamlines the design process by providing immediate, relevant support inside your workspace.

AI agent: The autonomous worker

For complex tasks like gap analysis, an AI agent operates with high autonomy. Once you set a goal, such as auditing assessment documents against a unit’s requirements from training.gov.au, it proactively executes a multi-step workflow. The agent navigates sites, downloads requirements, and identifies evidence gaps across files without further prompting. Unlike reactive tools, it completes the entire project independently and delivers a finished mapping matrix directly to your inbox.
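To make these three patterns concrete, here is a minimal, purely illustrative sketch in Python. Every function below is a hypothetical stand-in rather than a real AI product or library; the point is only to show who drives each step: the user (chatbot), the tool watching your document (assistant), or the software itself (agent).

    # Illustrative sketch only: all functions are hypothetical stand-ins,
    # not a real AI product or library.

    def ask_model(prompt: str) -> str:
        """Stand-in for a call to any large language model."""
        return f"[model response to: {prompt!r}]"

    # 1. Chatbot: the user drives every exchange, manually pasting text in.
    def chatbot_session(pasted_text: str) -> str:
        return ask_model(f"Explain in plain English: {pasted_text}")

    # 2. Assistant: the tool 'sees' the active document and reacts to it.
    def assistant_suggest(document_text: str) -> str:
        return ask_model(f"Suggest knowledge checks for this draft: {document_text}")

    # 3. Agent: given one goal, it plans and executes the steps itself.
    def agent_run(goal: str) -> list[str]:
        plan = ["download unit requirements", "read assessment documents",
                "compare evidence against requirements", "write mapping matrix"]
        return [ask_model(f"{goal} -> step: {step}") for step in plan]

    if __name__ == "__main__":
        print(chatbot_session("<pasted Performance Criterion text>"))
        print(assistant_suggest("<draft learner guide text>"))
        for result in agent_run("Audit assessments against the unit"):
            print(result)

Note that only the agent generates and follows its own plan; the human’s involvement ends once the goal is set, which is exactly where the compliance risks discussed below begin.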

The following is a summary comparing the above three AI applications.

Using AI agents to develop training and assessment materials

While AI agents offer significant efficiency gains in automating high-volume tasks, their use within the Australian VET sector, specifically under the 2025 Standards for RTOs, poses serious risks when developing training and assessment materials.

Here are five ways that relying on an AI agent can degrade the quality of training and assessment materials.

The compliance illusion

AI agents excel at keyword matching but lack the expert judgment to determine whether a task measures competency. An agent might incorrectly flag an assessment tool as ‘fully mapped’ simply because it identifies specific terms from the Performance Criteria. However, it cannot determine whether the task actually represents a valid or authentic measure of competency in a real-world workplace. This creates a ‘compliance illusion’ that can lead to a finding of non-compliance at audit.
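To illustrate the point (the criterion text and the assessment item below are invented, not drawn from any real unit), here is a sketch of the kind of naive keyword match an agent might rely on:

    import re

    # Illustrative only: the criterion and the assessment item are invented.
    # Criterion: "identify workplace hazards and report them to the supervisor"
    KEY_TERMS = {"identify", "hazards", "report", "supervisor"}

    def keyword_mapped(assessment_item: str) -> bool:
        """Naive check: flag 'mapped' if every key term appears in the item."""
        words = set(re.findall(r"[a-z]+", assessment_item.lower()))
        return KEY_TERMS <= words

    # A multiple-choice question that merely mentions the right words...
    item = ("Question: a supervisor asks you to identify and report "
            "workplace hazards. Which answer is correct?")

    print(keyword_mapped(item))  # True -> flagged as 'fully mapped'

The check returns True, yet ticking a box in a multiple-choice question demonstrates nothing about whether the candidate can actually identify and report hazards in a workplace. The keyword match says ‘compliant’; the assessment may still be invalid.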

Compromised intellectual property

Developing high-quality training and assessment materials requires significant investment. Unless you are using a private AI system, uploading an RTO’s documents can mean your IP is used to train external AI models. For many RTOs, this is not just a quality issue but a major breach of data sovereignty and a loss of competitive advantage.

Pedagogically flawed

Training Packages on training.gov.au are complex and frequently updated. An AI agent may inadvertently pull historic definitions or draw from outdated datasets. Furthermore, it often lacks the ability to interpret the Companion Volume Implementation Guide, which provides the essential context for how a unit should actually be delivered and assessed, leading to mapping that may be technically correct but pedagogically flawed.

Lack of accountability for ‘hallucinated’ mapping

If an AI agent produces a mapping matrix that claims specific content or an assessment item covers a Performance Criterion or Foundation Skill when it actually does not, the responsibility still rests entirely with the RTO. Unlike a human instructional designer, who can provide an evidence-based rationale, an agent cannot justify its professional judgment. This lack of accountability results in unreliable mapping.

Erosion of contextualisation

A core requirement of the VET sector is contextualisation. This means tailoring training and assessments to a specific industry or learner cohort. AI agents tend to produce generic, one-size-fits-all training and assessment materials. Relying on an autonomous agent risks producing ‘cookie-cutter’ materials that fail to meet compliance or contextualisation requirements.

Conclusion: Efficiency must not replace expertise

‘Set and forget’ AI agents for resource generation and compliance mapping are tempting for the time-poor VET sector. However, there is a vast chasm between functional automation and quality materials. Speed is irrelevant if the output fails a compliance audit.

Outsourcing instructional design to autonomous AI agents risks sacrificing human professional judgment. While AI can complete complex tasks at lightning speed, it lacks the capacity to understand workplace nuances, specific learner cohorts, or the pedagogical depth of a Training Package.

For RTOs, the warning is clear. Investigate how developers of training and assessment materials have used AI. Is it a chatbot for research, an assistant for drafting, or an agent for autonomous creation? As human oversight decreases, the risks to compliance and learner outcomes increase.

Technology should be embraced as a tool, not a replacement. Use chatbots to brainstorm or assistants to refine prose, but keep the human instructional designer at the centre of the development process. In an era of AI agents, human expertise is the only safeguard against a ‘cookie-cutter’ future.

Please tell me what you think!

Human versus AI: The future of assessment design

Introduction

Recently, I reviewed a suite of VET training and assessment materials purchased from a well-known commercial supplier. Despite the provider’s reputation, the resources were disappointingly unfit for purpose. Focusing specifically on the assessment components, I identified several critical issues:

  • Poor usability: Overly complex numbering and an excessive amount of fragmented documents made navigation difficult.
  • Language and literacy barriers: The content was cluttered with unnecessary instructions and jargon that is neither learner-friendly nor used in actual workplaces.
  • Lack of context: Assessments lacked specific scenario details and felt like generic templates rather than materials tailored to the unit of competency being assessed.

The overall quality was bland and disconnected. This is highly characteristic of AI-generated content. I later confirmed that this supplier is indeed a ‘leading’ user of AI to produce their materials. This serves as a stark reminder: while AI is a powerful tool, it cannot replace the human expertise required to create meaningful, compliant VET resources.

Structuring assessment tasks

While there are typically multiple ways to structure assessment tasks, the quality of that design varies significantly. At the highest level, a structure is effective, efficient, and compliant, balancing regulatory requirements with a smooth user experience. Other designs may be adequate and compliant but ultimately burdensome, creating unnecessary hurdles for both the learner and the assessor. More concerning are structures that are inadequate but appear compliant on the surface, masking deeper flaws. Finally, some structures are simply inadequate and obviously non-compliant, failing to meet the basic standards required for a valid assessment.

To illustrate these differences in practice, I have provided the following three distinct comparisons between AI-generated and human-designed assessment tasks across various industry sectors. These three examples highlight how a human-led strategy ensures that the structure remains both pedagogically sound and practical. While the AI versions may tick boxes in a literal sense, the human-designed versions demonstrate a deeper understanding of how to weave complex requirements into a logical, streamlined workflow that supports an effective, efficient and compliant assessment process.

Example 1. BSBCMM411 Make presentations

The following is the Performance Evidence for the BSBCMM411 Make presentations unit of competency.

The following are assessment tasks generated by AI.1

The following are the assessment tasks generated by a human.2

The following is a list1 of five reasons why the human-generated assessment structure for the BSBCMM411 unit is superior to the AI-generated version.

  • Logical chunking of workflow: The human version groups the planning, delivery, and review into a single cohesive task for each presentation (Task 2 and Task 3), whereas the AI splits the planning and delivery into entirely separate tasks.
  • Reinforcement of the full cycle: By requiring the candidate to complete the entire cycle (Plan-Deliver-Review) for the first presentation before moving to the second, the human structure allows for immediate application of “lessons learned”.
  • Explicit material development: The human-generated structure explicitly includes the “development of presentation aids” within the planning phase, ensuring this critical requirement is not overlooked, while the AI description is more generic.
  • Clarity on “different” scenarios: The human structure clearly mandates that Task 3 must be a second presentation that is “different to the presentation delivered in Task 2”, providing a clear instruction for meeting the unit’s diversity requirements.
  • Reduced administrative confusion: In the AI structure, an assessor must jump back and forth between Task 2 (Planning) and Task 3 (Delivery) to grade one presentation. The human structure allows an assessor to finalise all evidence for “Presentation 1” within a single task block.

Example 2. CHCECE037 Support children to connect with the natural environment

The following is the Performance Evidence for the CHCECE037 Support children to connect with the natural environment unit of competency.

The following are assessment tasks generated by AI.1

The following are the assessment tasks generated by a human.2

The following is a list1 of three reasons why the human-generated assessment structure for the CHCECE037 unit is superior to the AI-generated version.

1. Direct alignment with assessment requirements

The Performance Evidence explicitly requires evidence of supporting children’s knowledge on three occasions.

  • Human Design: Tasks 2, 3, and 4 in the human version clearly provide these three distinct opportunities (Indoor, Outdoor, and Aboriginal/Torres Strait Islander focused).
  • AI Design: The AI version only lists two clear implementation experiences (Experience A and B) in Task 3, potentially failing to meet the “three occasions” mandate.

2. Specific inclusion of cultural perspectives

The unit requires that at least one occasion must involve Aboriginal and/or Torres Strait Islander peoples’ use of the natural environment.

  • Human Design: Dedicates a specific, standalone task (Task 4) to ensure this mandatory requirement is met and observed.
  • AI Design: Completely omits this specific cultural requirement in its brief descriptions, focusing instead on generic activities like “seed growing” or “scavenger hunts”.

3. Clear Indoor/Outdoor distinction

The unit requires one indoor and one outdoor opportunity.

  • Human Design: Explicitly structures Task 2 as an indoor activity and Task 3 as an outdoor activity, ensuring the candidate covers both environments.
  • AI Design: Focuses heavily on the outdoor environment (Task 2 audit and Task 3 “nature play”), without clearly designating or requiring a specific indoor engagement.

Example 3. CPCCCA3010 Install windows and doors

The following is the Performance Evidence for the CPCCCA3010 Install windows and doors unit of competency.

The following are assessment tasks generated by AI.1

The following are the assessment tasks generated by a human.2

The human-generated assessment tasks ensure full compliance with the specific Performance Evidence for the CPCCCA3010 unit. The following is a list1 of three reasons why the human-generated assessment structure is superior to the AI-generated version.

1. Inclusion of specific door types

The Performance Evidence requires the installation of a sliding cavity door unit and door, and a pair of doors.

  • Human Design: Includes “Task 4” specifically for the sliding cavity door and “Task 5” for the pair of doors.
  • AI Design: Uses generic categories like “External Door” and “Internal Door”, which fails to explicitly require these two specialised installation types.

2. Accurate quantity of installations

  • Human Design: The human-generated tasks align perfectly with the requirement to install “a” (single) standard window.
  • AI Design: The AI-generated Task 2 requires the candidate to install two windows, which adds an unnecessary burden not specified in the performance evidence.

3. Integration of planning and installation

  • Human Design: Integrates the “plan” and “prepare” requirements directly into every individual practical task (Tasks 2, 3, 4, 5, and 6). This ensures that the planning is context-specific to the unique requirements of a window, a sliding cavity door, or a pair of doors.
  • AI Design: Separates “Planning & Compliance” into a standalone Portfolio (Task 3). By treating planning as a generic administrative exercise rather than an embedded part of the installation process, the AI version risks a disconnect between the candidate’s theoretical plan and the actual technical preparation required for different types of frames and doors.

Conclusion: Why the human designer is irreplaceable

The examples above highlight a consistent pattern: while AI can generate a list of tasks that look like an assessment, it lacks the professional judgment to design a strategy that is actually fit for purpose.

The disparity between these two approaches boils down to three critical factors:

  • Nuance and compliance: As seen in the CPCCCA3010 and CHCECE037 examples, AI frequently misses specific requirements that are essential for a finding of competency. A human designer reads between the lines of a Training Package to ensure no mandatory evidence is overlooked.
  • Pedagogical workflow: AI tends to “atomise” tasks into clinical, disconnected steps. In contrast, human designers understand how a job actually functions. By grouping planning, execution, and review into a single cohesive task, as seen in the BSBCMM411 example, humans create a natural assessment flow that mirrors real-world workplace practice rather than a fragmented digital checklist.
  • The “Goldilocks” principle of evidence: AI often oscillates between two extremes: providing too little detail or creating “assessment bloat” by requiring more work than is necessary. A human expert knows how to design a strategy that is “just right”, meeting every requirement specified by the unit of competency without placing an unnecessary administrative burden on the learner or the assessor.

AI is a powerful assistant for brainstorming or drafting, but it is a poor architect. In the high-stakes environment of VET compliance, an assessment strategy is more than just a document. It is a roadmap that needs to be accurate and compliant. The “human-in-the-loop” must remain the “human-at-the-helm.”

Investing in human-led design isn’t just about avoiding “bland” materials; it’s about ensuring that our VET students are truly competent and that our RTOs remain compliant.

Footnotes:

1 On the 2nd of March 2026, Gemini was the AI platform used to generate the assessment tasks for the three examples. It was also used to compare the assessment structures generated by AI and by the human.

2 Alan Maguire was the human who generated the assessment tasks for the three examples. He has more than 40 years’ experience designing training and assessment. Alan may be getting older, but he is not yet redundant.

Dissatisfaction with the TAE40122 qualification

It is no secret that the TAE40122 Certificate IV in Training and Assessment is disliked by many people.

Every six months over the past two years, I have conducted a poll to find out whether people were enjoying their Certificate IV in Training and Assessment.

The following graph shows the most recent poll result and the results from previous polls.

And here is an analysis of the most recent poll compared with previous polls.

The result from November 2023 shows that 50% of people studying for their Certificate IV in Training and Assessment were enjoying it, and 50% were not enjoying it or only sometimes enjoying it. This was when most people were doing the TAE40116 qualification.

The results from July 2024 and November 2024 show a massive decrease in satisfaction and a massive increase in dissatisfaction. This is the year in which the TAE40122 qualification began to be implemented by most RTOs.

The two results for 2025 show increasing satisfaction and a corresponding decrease in dissatisfaction. I assume this is because RTOs have been improving the way they deliver the TAE40122 qualification. The November 2025 result shows 33% are satisfied. However, this is not a good result, since two-thirds of people remain dissatisfied.

Does it matter if people enjoy doing their Certificate IV in Training and Assessment? Yes, it matters. If people are not enjoying it, they become dissatisfied; some become confused or frustrated, experience self-doubt, and face increased barriers to learning.

Sadly, if you are not enjoying your Certificate IV in Training and Assessment, you are not alone.

Do you need help with your TAE studies?

Are you doing the TAE40122 Certificate IV in Training and Assessment, and are you struggling with your studies? Do you need help with your TAE studies?

Ring Alan Maguire on 0493 065 396 to discuss.

Contact now!


Training trainers since 1986

Australia’s VET system: The top 5 topics

Based on recent reports and ongoing discussions, the top 5 topics relating to the Australian VET system are:

  1. Quality and consistency of training
  2. Engagement and responsiveness to industry
  3. Funding models and financial sustainability
  4. VET workforce
  5. Tertiary harmonisation and pathways.

This article is a bit long. I hope you can make it to the end.

1. Quality and consistency of training

Quality and consistency of training remains a critical issue. While the VET sector is valued, there are ongoing concerns about the consistency of training quality across different Registered Training Organisations (RTOs) and TAFE courses. This includes ensuring that graduates have the relevant skills, that training is of a high standard, and that there’s enough focus on practical skills. The Australian Skills Quality Authority (ASQA) plays a key role in regulating and auditing RTOs to ensure compliance with the VET Quality Framework, but quality issues persist.

ASQA has been shifting its regulatory focus, moving away from extensive external audits towards a model of self-regulation for RTOs. This new approach emphasises an RTO’s internal ability to monitor, evaluate, continuously improve, and manage risks related to training quality. However, this shift presents a potential for failure due to an inherent conflict of interest. Providers might prioritise financial gain over genuine quality, potentially leading to a decline in overall standards within the sector.

The move towards self-regulation and a perceived lack of independent scrutiny may contribute to an environment where fraudulent activities can occur more easily. Relying on internal monitoring systems carries significant risks, as these can be manipulated or under-resourced. Proactive, regular external audits would likely be more effective in identifying potential issues early on, rather than waiting for problems to be reported through ASQA’s “VET tip-off line” after the fact.

The current spate of de-registered RTOs and cancelled qualifications may be linked to a lack of onsite audits being conducted by the regulator.

2. Engagement and responsiveness to industry

A crucial aspect of VET is its ability to meet the rapidly changing needs of employers and industries. There’s a strong focus on strengthening industry engagement to ensure that VET qualifications and training programs are relevant and aligned with current and emerging workforce demands. The establishment of Jobs and Skills Councils (JSCs) is a recent reform aimed at giving industry a stronger voice in identifying skills needs, developing training products, and collaborating with providers.

Since the establishment of the current Australian VET system in 1992, industry and employers have been positioned to give advice on workforce skill needs, VET qualifications, and units of competency. These industry-led groups began as the National Industry Training Advisory Boards, which were replaced by Industry Skills Councils (ISCs), which were replaced by Industry Reference Committees (IRCs), which have now been replaced by Jobs and Skills Councils (JSCs).

Each change has been designed to give industry a stronger voice and to streamline the training product development process. I agree that Australia’s VET system should be responsive to industry and employers. And I agree that Australia’s VET system should be national as well as industry led. However, many concerns expressed by industry and employers would be resolved if RTOs were more responsive to ‘local’ needs. There is significant flexibility for training products to be customised and contextualised.

Lack of responsiveness and flexibility can often be fixed at the local level by RTOs, rather than by changing the training package development process.

3. Funding models and financial sustainability

The financial foundations of the VET sector are under pressure. Traditional funding models consist of a mix of government allocations and student fees. RTOs, including TAFEs, raise concerns that there isn’t adequate and sustainable funding to develop quality training and provide sufficient student support services.

In recent years, the Australian Government and the governments of the states and territories have been prioritising funding to TAFEs. TAFEs have been spending, and continue to spend, a lot of money on fancy buildings and expensive technological infrastructure. It is questionable whether learning is improved by TAFEs having these new buildings and advanced technology.

I think that there will never be enough funding. And I think that the dilemma of wanting high-quality but low-cost training will continue to be an unresolvable problem for the VET system. However, I will be happy to be proven wrong.

4. VET workforce

A major challenge is attracting, retaining, and developing a skilled VET workforce. National strategies are being developed to grow the workforce and improve retention, including the ‘Credential Policy’. This policy came into effect alongside the new Standards for Registered Training Organisations on the 1st of July 2025. I believe that this policy will have no or limited impact on the VET workforce, nor on improving training quality. Again, I will be happy to be proven wrong.

5. Tertiary harmonisation and pathways

Over the last few years, some VET influencers and some VET lobby groups have been saying that there needs to be a better connection between the VET system and Higher Education. Their goal is to break down the barriers between VET and university pathways. This is not a new idea.

Background

The Australian Qualifications Framework (AQF) was first introduced in 1995. It included ‘articulation arrangements’. These arrangements were a set of principles to assist the establishment of connections between different qualifications. An entire section of this first AQF was devoted to ‘articulation arrangements’. [1]

The Australian Qualifications Framework (AQF) was revised and republished in 2011. A second edition of the revised AQF was published in 2013. The revised AQF clearly states an objective to support the development and maintenance of pathways which provide access to qualifications and assist people to move easily and readily between different education and training sectors. [2] [3]

The AQF aims to assist people to plan their career progression. In this way, it encourages lifelong learning. The AQF 2013 defines ‘articulation arrangements’ as arrangements that enable students to progress from a completed qualification to another with admission and/or credit in a defined qualification pathway. [3]

In 2012, the Standards for Training Packages were published. In that document, the term ‘articulation arrangements’ was replaced by ‘credit arrangements’. The Standards for Training Packages have been republished several times, and in these documents ‘credit arrangements’ has been defined as the arrangements existing between Training Package qualifications and Higher Education qualifications. [4]

What’s new?

Thirty years have passed since the AQF was first published, and thirteen years have passed since the Standards for Training Packages were published (replaced by the Training Package Organising Framework on the 1st of July 2025). Not one ‘credit arrangement’ has been established. Not one nationally agreed articulation arrangement has been established.

Routinely, the universities have been unwilling to recognise VET qualifications. The disconnect between the VET system and Higher Education has been impenetrable. So, some VET influencers and some VET lobby groups have taken a new approach. This approach is called tertiary education harmonisation.

Tertiary education harmonisation means VET and Higher Education working more closely together. The aim is a more seamless and aligned tertiary education system. This does not imply that Australia will merge Higher Education and VET into one system. ‘Alignment’ is not the same as ‘merge’.

VET and Higher Education would remain separate sectors with important differences in their missions and their approaches to learning. The Australian Government is currently investing $27.7 million over 4 years to 2027-28, including $15.9 million specifically for VET to improve ‘tertiary education’ collaboration. The only tangible deliverable at the moment seems to be a ‘roadmap’ to be developed. [5] [6]

Is a ‘roadmap’ worth a $27.7 million investment?

How will VET spend $15.9 million to improve ‘tertiary education’ collaboration? And who will get this money?

Will ‘alignment arrangements’ achieve the same outcome as ‘articulation arrangements’ and ‘credit arrangements’? In other words, achieve nothing.

In conclusion

After more than 30 years, the VET system continues to be plagued with problems. The entire VET system is currently being changed.

  • ASQA’s new regulatory approach promoting self-regulation and the new Standards for RTOs (effective from the 1st of July 2025) aim to improve the quality and consistency of training.
  • Establishment of Jobs and Skills Councils and the new Training Package Organising Framework (effective from the 1st of July 2025) aim to improve engagement and responsiveness to industry’s skill needs.
  • Funding models prioritising TAFEs, strategies to positively impact the VET workforce, and tertiary education harmonisation aim to improve the Australian VET system.

These changes were started by Scott Morrison’s government and have continued to be implemented by Anthony Albanese’s government. But will all these changes make the Australian VET system better?

What do you think?

Foundation Skills have changed

The 2025 Training Package Organising Framework has replaced the 2012 Standards for Training Packages. The 2025 Training Package Organising Framework makes two significant changes regarding Foundation Skills:

  • The definition of Foundation Skills has changed, and
  • The information about Foundation Skills has changed.

The definition of Foundation Skills has changed

The 2012 Standards for Training Packages required Foundation Skills to be documented at the Unit of Competency level. Also, the 2012 Standards for Training Packages clearly defined Foundation Skills as the language, literacy, numeracy and employment skills. [1]

Training Package developers described Foundation Skills that specifically related to the Unit of Competency. For example, the following shows the Foundation Skills that have been specified for the BSBSUS211 Participate in sustainable work practices unit.

The above example describes three language, literacy and numeracy skills and several employment skills (teamwork, initiative and enterprise, self-management, and technology). This information about Foundation Skills will no longer be required at the Unit of Competency level. Also, this detailed information about Foundation Skills that specifically relates to the Unit of Competency will no longer be provided.

The 2025 Training Package Organising Framework makes a significant change to Foundation Skills. Instead of specifying Foundation Skills at the Unit of Competency level, Foundation Skills are to be specified within the Qualification or Skill Set. However, a Training Package developer may document Foundation Skills for a Unit of Competency that is a standalone unit or has high delivery as a single unit. [2]

A standalone unit is defined as a unit that is not packaged as part of a qualification. Previously, all units had to be packaged as part of a qualification. This requirement has changed. [2]

An example of a single unit with high delivery is First Aid. [2]

The 2025 Training Package Organising Framework redefines Foundation Skills. Foundation Skills are now defined as the five Australian Core Skills Framework (ACSF) skills: [2] [3]

  • Learning skills
  • Reading skills
  • Writing skills
  • Oral communication skills
  • Numeracy skills.

Specifying digital literacy skills is optional. [2]

The information about Foundation Skills has changed

The 2025 Training Package Organising Framework requires Foundation Skills to be specified within the Qualification or Skill Set. The Training Package developer may document Foundation Skills within a Unit of Competency that is a standalone unit or has high delivery as a single unit, but this is optional. [2]

Importantly, the information about Foundation Skills provided by Training Package developers has changed. Instead of providing detailed information about relevant Foundation Skills for a Unit of Competency, Training Package developers will state the required ACSF level for each of the five Core Skills from the ACSF and display this information as a bar chart for Qualifications and Skill Sets. For example: [2]
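For illustration only (the levels below are invented, not taken from any real Qualification), this text-only Python sketch shows the kind of information such a bar chart conveys: one required ACSF level, from 1 to 5, for each of the five core skills.

    # Illustrative only: these ACSF levels are invented for demonstration.
    acsf_levels = {
        "Learning": 3,
        "Reading": 3,
        "Writing": 2,
        "Oral communication": 3,
        "Numeracy": 2,
    }

    # Render a simple text bar chart, one row per core skill (levels 1-5).
    for skill, level in acsf_levels.items():
        bar = "#" * level + "." * (5 - level)
        print(f"{skill:<20} [{bar}] ACSF level {level}")

Note what such a chart cannot convey: the specific foundation skills needed for any particular Unit of Competency, which is the loss discussed in the conclusion below.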

Specifying digital literacy skills is optional. The Training Package developer may specify digital literacy skills as a descriptive statement below the Foundation Skills bar chart. [2]

The following table compares pre-2025 Foundation Skills and post-2025 Foundation Skills.

I hope the last row in the above table clearly shows how significantly the information about Foundation Skills provided by Training Package developers is changing.

In conclusion

Units of Competency are the building blocks for Qualifications and Skill Sets. Each Unit of Competency has its own unique foundation skill requirements. The Foundation Skills bar chart for a Qualification or Skill Set provides no information relevant to foundation skills required to perform work tasks covered by any particular Unit of Competency.

When the 2012 Standards for Training Packages were implemented, many people complained about losing useful Range Statement information. As the 2025 Training Package Organising Framework is implemented, I wonder if people are going to complain about losing useful Foundation Skills information.

“You don’t know what you have until it’s gone.”

References

[1] 2012 Standards for Training Packages (last updated in 2022)

[2] 2025 Training Package Organising Framework

[3] Australian Core Skills Framework

Do you need help with your TAE studies?

Are you doing the TAE40122 Certificate IV in Training and Assessment, and are you struggling with your studies? Do you need help with your TAE studies?

Ring Alan Maguire on 0493 065 396 to discuss.

Contact now!


Training trainers since 1986