Having prepared the agenda and scripts, there is still a lot of work to do in actually organising the event. The scripts must be organised into a timetable. This should allow time for assessors to sum up their thoughts after each session and, ideally, for a follow-up session at the start or end of each day to pick up on issues from previous sessions.
If you are intending to run parallel sessions, the supplier needs to know this in good time to arrange for people to cover each session. Don’t underestimate how tiring the sessions will be for the assessors – the job demands a great deal of concentration and regular breaks will be required to maintain attention.
It is a good idea to have briefings with both assessors and suppliers before the event to set out the ground rules for the evaluation. Some of the points you may wish to cover are discussed below.
You need to provide the suppliers with some form of evaluation event pack that they can consider before attending the briefing. This should contain:
- The test scripts
- Copies of any test data they need to set up
- Any relevant background information about the project.
This is the last chance for the supplier to ensure they understand your requirements before they demonstrate their product to you. A good supplier will have thought about the contents of the pack and come to the briefing prepared with a list of questions. Frequently they will take the opportunity to negotiate over the timetable, i.e. asking to spend more time on this and less on that.
Our advice is “stay in the driving seat” – you have already thought about what is important to you and what you want to see. You may agree that some of the changes are reasonable but take care to ensure you are being fair to all of the suppliers involved.
In a tightly managed evaluation timing is of critical importance and it is in everyone’s interest to ensure that sessions do not overrun. You will not be in a position to make a fair comparison between two suppliers if one of them spent half an hour on a topic while the other demonstrated for two hours.
You need to beware of suppliers dwelling at length on the best features of their product (which may not necessarily be the features most important to you) and then skating over weaknesses due to “lack of time”.
Similarly, you need to manage the input of your own evaluation team. If your scripts are well thought out and prepared (and the supplier is well prepared) the demonstration should give the team all the answers they need and there should be little need for ad hoc questioning. This isn’t always the case and you need to watch out for the risk of sessions being “hijacked” by someone with a particular interest in one area.
Thorough briefing of suppliers and evaluators will help, but it is worth appointing someone to facilitate each session, or at the very least to keep an eye on the time and ensure you are getting through the demonstration at the expected rate. Ideally this role should be undertaken by someone who isn’t scoring the session, so they can give their full attention to the job. This person can also be helpful in picking up issues where suppliers do not stick rigidly to the order of the script and need to come back to points later.
The facilitator/timekeeper should have sufficient confidence to check with the team that a point has been adequately covered and move the supplier along or request further explanation where necessary.
Where your team has a lot of areas to evaluate you may wish to consider making a recording of the sessions. This can be helpful if you simply can’t remember the answer to a particular question or if there are differing interpretations of what was said. It can also help to tone down some of the more exuberant and optimistic sales promises if the supplier knows you have a full record of the discussion.
It is also worth appointing a team leader, someone who is more of a functional expert, to oversee the progress of the evaluation itself. This person will need to collect score sheets at the end of each session and do a rough check that there are no missing or anomalous results. Issues to look out for are where a number of people have failed to evaluate a point due to insufficient information, or where the scores of individual team members differ greatly.
This can highlight areas which should be followed up in the recap/follow-up sessions.
The team leader may also facilitate the final summing up of the team scores. There are bound to be some genuine and valid differences of opinion about the different products, and rather than take a purely quantitative approach and average out the scores, it is worth exploring the reasons for these differences. A simple average can give a compromise solution that isn’t a best fit in any area: a product scored nine by one evaluator and one by another averages the same as a product scored five by both, yet the two results tell very different stories.
This facilitation role may equally well be carried out by an objective outsider provided they have facilitation skills and a reasonable level of subject knowledge. In practice most projects are unable to draw on an unlimited number of people and find it easier to use a team member.
You need to establish what scoring mechanism you will use for the evaluation to ensure consistency between evaluators. This may be a simple numeric score, e.g. marks out of ten for each area, but, as mentioned above, you need to consider the risk that a purely quantitative approach may smooth over some important issues.
Here is an example of a qualitative scoring system. It works on the basis that a requirement is either “Met” or “Not Met”. There will, however, inevitably be grey areas which fall into the category “Partly Met”. It may be that a supplier tells you the functionality is being developed and will be in a future release of the product, or it may be that by changing your processes the system could achieve the desired output. In any major systems purchase there are likely to be many of these grey areas, and you have to decide how important the gaps are and how you will compensate for them. It is these areas which may make or break your implementation project, and you need to have a clear view of them before you can develop an effective implementation plan or set a realistic budget.
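To make the scheme concrete, here is a minimal sketch of how these categories might be recorded. The category names follow the “Met” / “Partly Met” / “Not Met” scheme described above, but the numeric weights, the “Not Scored” entry and the field names are assumptions made purely for illustration rather than part of any standard template.

```python
from dataclasses import dataclass
from enum import Enum

class Score(Enum):
    """Qualitative scoring categories for a single scripted requirement."""
    MET = "Met"                # demonstrated working as required
    PARTLY_MET = "Partly Met"  # grey area: future release, workaround or process change
    NOT_MET = "Not Met"        # not demonstrated or not available
    NOT_SCORED = "Not Scored"  # left blank: insufficient information to score

# Illustrative numeric weights, used only where a rough summary figure is wanted.
# The weighting itself is an assumption, not part of the scheme described above.
SCORE_WEIGHTS = {Score.MET: 2, Score.PARTLY_MET: 1, Score.NOT_MET: 0}

@dataclass
class ScoredItem:
    """One line of an evaluator's score sheet."""
    task_id: str       # reference back to the test script task
    score: Score
    comment: str = ""  # e.g. "promised for next release", "requires process change"
```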
A suggested method of scoring is for each individual to complete their own score sheet, which is then input to a spreadsheet so that scores can be compared. Where all of the team members agree on a score it can be taken as it stands. Where there are differences, these need to be discussed and a final score agreed.
NB Where a large number of points are being scored it is possible to speed things up by automating the comparison process. This can be achieved in most spreadsheet packages by use of a simple macro.
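The macro itself will depend on your spreadsheet package. As an alternative illustration, the following Python sketch performs the same comparison on a combined score sheet exported to CSV; the file name, the one-row-per-task layout and the task_id column heading are assumptions for the example.

```python
import csv

def flag_discrepancies(csv_path):
    """Read a combined score sheet (one row per task, one column per evaluator)
    and flag the tasks that need discussion: missing scores or disagreement."""
    to_discuss = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            task = row.pop("task_id")  # assumed column name
            scores = {name: value.strip() for name, value in row.items()}
            missing = [name for name, value in scores.items() if not value]
            distinct = {value for value in scores.values() if value}
            if missing or len(distinct) > 1:
                to_discuss.append((task, scores, missing))
    return to_discuss

# Example use: list the tasks the team leader should raise at the recap session.
for task, scores, missing in flag_discrepancies("supplier1_scores.csv"):
    note = f"missing from {', '.join(missing)}" if missing else "scores differ"
    print(f"{task}: {note} -> {scores}")
```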
Individual team members, covering the appropriate discipline areas of the system selection project, complete test script score sheets at the supplier demonstration itself.
The scores and comments from individual team members’ score sheets are then entered into a summary spreadsheet.
The scores for individual suppliers are entered onto separate datasheets within the template, so supplier one appears on datasheet one, supplier two on datasheet two, etc. Team members are given numbers one, two, three, etc., but these would usually be replaced with the members’ respective initials.
The datasheets record scores against each individual task set for the test script process, using a pre-determined scoring definition. A space should be left where no score has been awarded.
The information entered into the datasheets can then be seen in the specific discipline areas elsewhere in the template, e.g. the worksheets for accommodation, general requirements, etc. Data should not be entered directly into these areas as they are formatted to reflect the information in the datasheets.
The information entered is also used automatically to produce graphical representations of the results, illustrating the strengths and weaknesses of each supplier relative to the others in the selection process. The graphs can be seen in the worksheets entitled “handout graphs” and “graphs” on the template.
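If you are building your own template rather than using an existing one, an equivalent summary and comparison graph can be produced with a short script. The sketch below assumes each supplier’s agreed scores have been exported to a CSV with task_id, area and score columns, and reuses the illustrative Met/Partly Met/Not Met weights from earlier; the file names and column headings are assumptions for the example.

```python
import csv
from collections import defaultdict
import matplotlib.pyplot as plt

WEIGHTS = {"Met": 2, "Partly Met": 1, "Not Met": 0}  # illustrative mapping

def area_totals(csv_path):
    """Sum weighted scores per discipline area for one supplier.
    Expects columns: task_id, area, score (assumed layout)."""
    totals = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["area"]] += WEIGHTS.get(row["score"].strip(), 0)
    return totals

suppliers = {"Supplier 1": "supplier1_scores.csv",
             "Supplier 2": "supplier2_scores.csv"}
results = {name: area_totals(path) for name, path in suppliers.items()}

# One grouped bar chart: discipline areas along the x-axis, one bar per supplier.
areas = sorted({area for totals in results.values() for area in totals})
width = 0.8 / len(results)
for i, (name, totals) in enumerate(results.items()):
    xs = [j + i * width for j in range(len(areas))]
    plt.bar(xs, [totals.get(a, 0) for a in areas], width=width, label=name)
plt.xticks([j + width * (len(results) - 1) / 2 for j in range(len(areas))],
           areas, rotation=45, ha="right")
plt.ylabel("Weighted score")
plt.legend()
plt.tight_layout()
plt.savefig("supplier_comparison.png")
```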