THE ASSESSMENT PROCESS

This section describes the ‘Assessment’ process, introduced in ‘The UCAM Processes’ section above, which is executed in order to conduct an assessment.

Requirements for the Assessment process

The requirements for the ‘Assessment’ process are given below in Figure 4.18 which highlights the relevant requirements on the diagram introduced in Figure 4.1 above.

As can be seen in the diagram, the main requirement for the ‘Assessment’ process is to ‘Carry out assessment’. The activities that are needed in the process to meet this requirement, and the artefacts that form inputs to and outputs from the process, are described in the following section.

Figure 4.18 Requirements for the Assessment process

Contents of the Assessment process

The contents of the ‘Assessment’ process are shown in Figure 4.19, annotated to show whether the various artefacts are inputs to or outputs from the process (or both).

Figure 4.19 shows that there are nine activities that have to be carried out in the ‘Assessment’ process, namely ‘assess indicator’, ‘calculate indicator set result’, ‘collate assessment information’, ‘determine competency profile’, ‘provide evidence’, ‘discuss assessment’, ‘select competency’, ‘select level’ and ‘update competency profile’. The process takes the ‘Competency Scope’ output from the ‘Assessment Set-up’ process and uses it to generate the ‘Competency Profile(s)’ for the assessees, along with the other artefacts shown in Figure 4.19. The way that the ‘Assessment’ process is carried out is shown in the following section.

Figure 4.19 The contents of the Assessment process

Carrying out the Assessment process

Figure 4.20 shows how the ‘Assessment’ process is carried out. The ‘soft boxes’ (rectangles with rounded corners) represent the various activities that have to be carried out and correspond with those shown in the bottom compartment in Figure 4.19. The vertical divisions (swim lanes) indicate which stakeholder role is responsible for carrying out which activity, and correspond to one or more of the stakeholder roles identified in Figure 4.2. The small rectangles containing arrows (known as ‘pins’) show inputs to and outputs from the various activities, with the name of the artefact flowing into or out of the activity shown on the line connecting the pins. These artefacts correspond to those found in the middle compartment in Figure 4.19.

The ‘Assessment’ process is carried out in order to perform an assessment based on a defined ‘Competency Scope’. The output from the ‘Assessment’ process is a ‘Competency Profile’ showing the results of the assessment against the ‘Competency Scope’.

Figure 4.20 shows that the ‘Assessment’ process begins with the ‘Primary Assessor’ executing the ‘collate assessment information’ activity. This is carried out to ensure that all the information needed to carry out the assessment is available, such as the ‘Competency Scope’ on which the assessment is to be based and the ‘Record Sheet’ on which the results of the assessment are to be recorded as the assessment proceeds. Once everything is ready, the ‘Primary Assessor’ executes the ‘select competency’ activity to choose the next competency to be assessed, followed by the ‘select level’ activity to choose the level to which the competency is to be assessed. These are based on information taken from the ‘Competency Scope’.

Figure 4.20 Carrying out the Assessment process

The ‘Primary Assessor’ then executes the ‘assess indicator’ activity, during which they engage with the ‘Assessee’ in order to establish whether the ‘Assessee’ can present evidence to show that the requirements of the indicator for the chosen competency and level have been met. The ‘Assessee’ conducts the ‘provide evidence’ activity in order to present such evidence. The ‘Primary Assessor’ records a pass or fail for the indicator – the ‘Indicator Result’. This activity is repeated for all the indicators of the selected competency–level combination.

When all the indicators have been covered, the ‘Secondary Assessor’ executes the ‘calculate indicator set result’ activity to convert the pass/fails to a simple percentage. For example, if a competency has five indicators at the chosen level and three are recorded as being passed and two as failed, then the ‘Secondary Assessor’ would calculate a percentage of 60 per cent for this competency level. This is the ‘Indicator Set Result’ for the competency and level.
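The pass/fail-to-percentage conversion described above is simple enough to sketch in code. The function name and the use of booleans for pass/fail are illustrative assumptions, not part of UCAM:

```python
def indicator_set_result(indicator_results):
    """Convert a list of pass/fail indicator results (True = pass)
    into a simple percentage for one competency-level combination."""
    if not indicator_results:
        raise ValueError("at least one indicator result is required")
    passes = sum(1 for passed in indicator_results if passed)
    return 100.0 * passes / len(indicator_results)

# The worked example from the text: five indicators, three passed, two failed.
print(indicator_set_result([True, True, True, False, False]))  # 60.0
```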

If there are more levels to be assessed for the currently selected competency, then the process returns to ‘select level’ where it repeats for the next level. If all the levels for the selected competency have been covered, then the ‘Secondary Assessor’ executes the ‘update competency profile’ activity. This updates the incomplete ‘Competency Profile’ with the results for the selected competency, based on the ‘Indicator Set Result’ values for each level of assessment.

If there are more competencies to be assessed, as determined by the contents of the ‘Competency Scope’, then the process returns to the ‘select competency’ activity where it repeats for the next competency. If there are no more competencies to be assessed, then the ‘Secondary Assessor’ executes the ‘determine competency profile’ activity to produce a complete, but not yet finalised, ‘Competency Profile’.
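The nested iteration over competencies, levels and indicators described above can be sketched as follows. The data shapes (a scope as a nested dictionary, an assessment callback) are illustrative assumptions rather than UCAM-defined structures:

```python
def run_assessment(competency_scope, assess_indicator):
    """Sketch of the Figure 4.20 control flow. 'competency_scope' maps each
    competency name to {level: [indicator, ...]}; 'assess_indicator' is a
    callback returning True (pass) or False (fail) for one indicator."""
    profile = {}
    for competency, levels in competency_scope.items():           # 'select competency'
        profile[competency] = {}
        for level, indicators in levels.items():                  # 'select level'
            results = [assess_indicator(competency, level, ind)   # 'assess indicator'
                       for ind in indicators]
            pct = 100.0 * sum(results) / len(results)             # 'calculate indicator set result'
            profile[competency][level] = pct
        # 'update competency profile' happens once per competency
    return profile  # complete but not yet finalised 'Competency Profile' data

# Illustrative scope: one competency assessed at one level, four indicators.
scope = {"Systems concepts": {1: ["life cycle", "hierarchy", "context", "interfaces"]}}
demo = run_assessment(scope, lambda c, lvl, ind: ind in {"life cycle", "interfaces"})
print(demo)  # {'Systems concepts': {1: 50.0}}
```

In practice, as the discussion later in this section stresses, a real assessment would not follow this mechanical ordering; the sketch only captures the logical dependencies between the activities.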

This ‘Competency Profile’ is discussed between the ‘Primary Assessor’ and ‘Secondary Assessor’ until agreement is reached on the proposed final, complete ‘Competency Profile’. This gives both assessors the chance to discuss any issues that they may have with any of the results, so that a ‘Competency Profile’ can be produced that they are both satisfied accurately reflects the outcome of the assessment. This ‘Competency Profile’ and the ‘Record Sheet’ used to capture the results during the assessment are the main outputs from the process.

It must be noted here that, of all the core UCAM processes, Figure 4.20 and the descriptive text above are the most ‘theoretical’, in that the diagram and text show an ordering of activities that, while it would work, would lead to a very stilted assessment being conducted. This is an inherent problem in trying to model something as flexible and often seemingly chaotic as human interaction. It must never be forgotten that the ‘Assessment’ process is meant to be carried out in as natural and non-threatening a manner as possible, something that is almost impossible to capture easily in such a diagram.

Figures 4.19 and 4.20 show a rather large number of artefacts for the ‘Assessment’ process. The various artefacts of the process and their relationships are discussed further in the following section.

Artefacts of the Assessment process

The main output from the ‘Assessment’ process is the ‘Competency Profile’ for an assessee, generated based on the ‘Competency Scope’ and the results of the assessment. The artefacts of the ‘Assessment’ process, together with their relationships, are shown in Figure 4.21.

The ‘Record Sheet’ is used to record the results of the assessment by capturing each ‘Indicator Result’ for an ‘Indicator’ (basically a pass or fail along with a note of why), based on the ‘Evidence’ presented for the ‘Indicator’. Each ‘Indicator Result’ for a given competency and level is grouped into an ‘Indicator Result Set’. A piece of ‘Evidence’ must be one of the ‘Evidence Type(s)’ that is defined as being acceptable to demonstrate that an ‘Indicator’ has been met. The ‘Record Sheet’ will contain each ‘Competency’ and each associated ‘Indicator’ taken from the ‘Competency Scope’ that forms one of the main inputs to the ‘Assessment’ process. The ‘Record Sheet’ is represented in Figure 4.21 by the package in the middle of the diagram labelled, appropriately, ‘Record Sheet’. A (very) small part of an example ‘Record Sheet’ is shown in Table 4.3.
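The relationships just described (an ‘Indicator Result’ recording ‘Evidence’ against an ‘Indicator’, with the ‘Evidence’ constrained to an acceptable ‘Evidence Type’) can be sketched as a small data model. This is an illustrative reading of Figure 4.21, not a UCAM-defined schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Indicator:
    description: str
    acceptable_evidence_types: frozenset  # the 'Evidence Type(s)' deemed acceptable


@dataclass
class IndicatorResult:
    indicator: Indicator
    evidence_type: str  # the 'Evidence Type' of the evidence actually presented
    evidence: str       # the 'Evidence' itself
    passed: bool        # pass or fail
    note: str = ""      # why the pass/fail was awarded

    def evidence_is_acceptable(self):
        """A piece of 'Evidence' must be of one of the acceptable types
        defined for the 'Indicator'."""
        return self.evidence_type in self.indicator.acceptable_evidence_types


# Example row, loosely modelled on Table 4.3.
ind = Indicator("Is aware of interfaces",
                frozenset({"Informal course", "Tacit knowledge"}))
res = IndicatorResult(ind, "Informal course", "Informal course certificate", True)
print(res.evidence_is_acceptable())  # True
```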

Figure 4.21 Relationships between the artefacts of the Assessment process

Table 4.3 shows part of a completed ‘Record Sheet’ for a single competency and level (here, level 1 of the ‘Systems concepts’ competency). It shows each ‘Indicator’ for the competency, the acceptable ‘Evidence Type’ for each ‘Indicator’ and the actual ‘Evidence’ presented. A pass or fail result is noted for each ‘Indicator’ and a percentage is calculated based on these for the entire ‘Indicator Set’. The ‘Rating Scheme’ in use is also shown, and the ‘Level Rating’ corresponding to the percentage of 50 per cent is highlighted. Of course, the full ‘Record Sheet’ for an assessment will be much longer, containing similar information for each competency and level assessed.

The contents of the ‘Record Sheet’, along with the ‘Competency Scope’, are used to generate the ‘Competency Profile’ for the assessee, showing the ‘Actual Level’ at which each competency is held, together with any ‘Note(s)’ deemed relevant. An example ‘Competency Profile’ is shown in Figure 4.22.

Figure 4.22 shows an example ‘Competency Profile’ based on the ‘Competency Scope’ example shown earlier. This scope is shown on the diagram by the thick line that indicates the level to which each competency was assessed.

Table 4.3 Example of a completed ‘Record Sheet’ (partial) showing ‘Rating Scheme’

Competency reference: ‘Systems concepts’, level 1

Indicator                        | Evidence type                    | Evidence                    | Pass/Fail
Is aware of system life cycle    | Informal course, tacit knowledge | Formal course certificate   | Pass
Is aware of hierarchy of systems | Informal course, tacit knowledge | No evidence                 | Fail
Is aware of system context      | Informal course, tacit knowledge | No evidence                 | Fail
Is aware of interfaces           | Informal course, tacit knowledge | Informal course certificate | Pass

Rating %: 50%

Rating scheme:
% range  | Level rating
81%–100% | Fully met
56%–80%  | Largely met
11%–55%  | Partially met
0%–10%   | Not met
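Applying the ‘Rating Scheme’ to an ‘Indicator Set Result’ percentage is a simple band lookup. The bands below are taken from the example in Table 4.3; the function name and list representation are illustrative assumptions:

```python
# Bands from the example 'Rating Scheme' in Table 4.3: (lower bound %, rating).
RATING_SCHEME = [
    (81, "Fully met"),
    (56, "Largely met"),
    (11, "Partially met"),
    (0, "Not met"),
]


def level_rating(percentage):
    """Map an 'Indicator Set Result' percentage onto a 'Level Rating'
    by finding the first band whose lower bound it meets."""
    for lower_bound, rating in RATING_SCHEME:
        if percentage >= lower_bound:
            return rating
    return "Not met"


print(level_rating(50))  # 'Partially met', as highlighted in Table 4.3
```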

For each competency and level that was assessed, the ‘Level Rating’ is shown in the corresponding cell. The ‘Level Rating’ is based on the ‘Indicator Result Set’ for the competency and level as determined by the ‘Rating Scheme’. Thus, it can be seen that the assessee has ‘Not met’ the ‘Validation’ competency at any of the assessed levels, but has ‘Fully met’ the ‘Modelling and simulation’ competency at both ‘Awareness’ and ‘Supervised practitioner’ levels and has ‘Partially met’ it at ‘Practitioner’ level, the highest level to which it was assessed. Where a competency is ‘Fully met’, the cell has been shaded to emphasise the ‘Actual Level’ that has been reached for a competency and to make clearer the gaps, if there are any, between the ‘Competency Profile’ and the source ‘Competency Scope’.
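On the reading above, the ‘Actual Level’ for a competency is the highest assessed level at which it is rated ‘Fully met’. A minimal sketch of that reading, with illustrative names and a dictionary representation that UCAM itself does not define:

```python
def actual_level(level_ratings):
    """Return the 'Actual Level' for one competency: the highest assessed
    level rated 'Fully met', or 0 if no level was fully met.
    'level_ratings' maps level number -> 'Level Rating' string."""
    fully_met = [level for level, rating in level_ratings.items()
                 if rating == "Fully met"]
    return max(fully_met, default=0)


# The 'Modelling and simulation' example from the text: levels 1 and 2
# fully met, level 3 only partially met, so the actual level is 2.
print(actual_level({1: "Fully met", 2: "Fully met", 3: "Partially met"}))  # 2
```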

Summary of the Assessment process

The ‘Assessment’ process is executed to carry out an assessment. An assessment is carried out against a ‘Competency Scope’, with the results captured on a ‘Record Sheet’. Using the defined ‘Rating Scheme’, these results are converted to a ‘Competency Profile’ for the assessee, showing how the assessee rates against the defined scope.

Discussion on the Assessment process

Figure 4.22 Example ‘Competency Profile’

As noted above in the description of the ‘Assessment’ process, it is difficult to represent something as complex as human interaction on a simple diagram. It was also noted that any assessment should be carried out in as natural and non-threatening a manner as possible. With this in mind, there are some points about the process that should be considered in order to help it run smoothly:
  1. Experience has shown that attempting to carry out a competency assessment in a reasonable time and in a manner that ensures repeatability and as much objectivity as possible is very difficult with a single assessor. For this reason it is recommended that assessments are always carried out with both the primary and secondary assessor roles.

  2. Assessors should always introduce themselves at the start of the process. The primary assessor should take the lead on this, and should explain to the assessee the reason for the assessment and the way in which the assessment will be carried out. The roles of the two assessors should be explained.

  3. The primary assessor should concentrate on ensuring that the assessment flows in as smooth a manner as possible. In practice, the assessor will never be as mechanical in approach as suggested by the diagram in Figure 4.20, but will choose competencies and levels based on the responses of the assessee. The assessment should not simply be a list of questions on one indicator after another, but should be as free-flowing and natural as possible (within the time set aside for the assessment). Doing this while simultaneously recording results and capturing the evidence presented is almost impossible, which is why the secondary assessor is needed. The secondary assessor should concentrate on this recording of results but can also ask questions if it is felt that the primary assessor has missed any areas of the scope being assessed.

  4. In order to ensure that the assessment is as free-flowing as possible, it is essential that both the primary and secondary assessors are familiar with the competency frameworks being used as the basis of assessment, and also, if possible, with the domain areas being covered by the assessment. In this way, they do not constantly have to look through and read the scope or other information on the competencies in order to understand what they are assessing, and they can also understand the responses given by the assessee when determining the pass or fail for a given indicator. At the very least, it is recommended that the primary assessor be familiar with the domain being covered by the assessment. To this end, examples of competency profiles for the assessor roles are provided later in this chapter.

  5. At the end of the assessment the primary assessor should explain to the assessee what happens next. In practice, it has been found that the assessee should leave the room once all the competencies and levels have been covered. The ‘determine competency profile’ and ‘discuss assessment’ activities in Figure 4.20 should not be conducted with the assessee present. In addition, the ‘determine competency profile’ activity might actually be carried out after the assessment. This is certainly the most effective method where a large number of assessments have to be made. It is often better to leave the assessment with an agreed ‘Record Sheet’ for the assessee and determine the profile later. Whether the ‘Record Sheet’ is made available to the assessee is something that should be decided prior to conducting the assessment, but in the authors’ experience it is better not to include this with the ‘Competency Profile’ that is given to the assessee.

For a competency assessment to be successful, the competency of assessors has to be ensured in terms of their domain knowledge of both the industry in which assessees work and in terms of the frameworks that are being used for the assessment. There should always be at least two assessors to ensure that the assessment flows smoothly and assessors should ensure that the assessment is non-intimidating and non-threatening. It should be a conversation and discussion with the assessee and not an interrogation that simply works through a checklist of questions.

Assessments can take an unexpected amount of time to conduct, even excluding any pre-assessment preparation on the part of the assessees (and assessors). As was noted previously, experience has shown that to assess against seven competencies to an average of level three typically takes around three hours. It is therefore essential that sufficient time is allocated for an assessment and that the competency scopes used are practical. If the assessees have some familiarity with the contents of the framework prior to an assessment, this can help ensure that it runs in a timely fashion. Having all supporting evidence to hand also makes the assessors’ roles easier when it comes to deciding whether the necessary evidence has been presented to support a competency.

Finally, it should be explained to assessees that competency levels may go down as well as up. This is not a bad thing but simply reflects the changes in competencies as an assessee’s career progresses and roles change. Someone working in the software industry who previously spent all day developing software in a particular programming language would be expected to be a ‘practitioner’ (if not even an ‘expert’) in appropriate competencies relating to that language and associated techniques. If, five years later, that same person is now managing a software project, and hasn’t developed software using that language for a number of years, then their level of competency will have dropped (although it could probably be quickly brought back up to the previous level if they again had to actively develop software). This doesn’t mean that they have now become incompetent; just that their roles have changed. They will hold other competencies at higher levels than they did previously and are even likely to hold completely different competencies reflecting their new roles. Of course, if competencies drop over time for an assessee whose role has not changed, then this is an indication of a problem with their competency to do their job.
