The short version

  • After submission, your proposal enters a structured evaluation process with two to four independent evaluators, a scoring framework, and a moderation panel.
  • Evaluators score against published criteria, not general impressions. If your response to a requirement is vague or missing, the score for that section is low regardless of how strong the rest of the proposal is.
  • Most evaluation panels use a consensus moderation step where individual scores are compared and debated before a final score is agreed.
  • The evaluation criteria and weightings are almost always published in the RFP. If you have not read them carefully, you are competing blind.
  • Understanding the evaluation process gives you a concrete advantage: you can structure your proposal to make the evaluator’s job easier, which directly affects your score.

Most people who write proposals have never evaluated one. They have never sat on the other side of the table, working through a stack of submissions with a scoring matrix, a cup of tea that went cold two hours ago, and a deadline to get their individual scores submitted before the moderation meeting.

This is a problem, because the evaluation process is not a mystery. It follows a defined structure, and that structure tells you almost everything you need to know about how to write a proposal that scores well. If you understand how evaluators work, you can write for them. If you do not, you are guessing.

Here is what actually happens to your proposal once you hit send.

Step one: compliance check

Before anyone reads a word of your content, your submission goes through a compliance check. This is a pass/fail gate. Someone, usually from the procurement team rather than the evaluation panel, checks whether your proposal meets the stated submission requirements.

They are looking for specifics. Did you submit in the correct format? Did you include all mandatory documents? Did you stay within stated page or word limits? Did you sign and return all required declarations? Is the pricing in the template provided?

If you fail the compliance check, your proposal may be rejected outright. It does not matter how strong your content is. The evaluators will never see it.

This is why experienced bid managers treat the submission checklist as a non-negotiable process step, not something to rush through at the last minute. One missing signature or an appendix in the wrong format can end your bid before the evaluation even begins.

Step two: individual scoring

Once your proposal clears compliance, it is distributed to the evaluation panel. In most formal procurements, this panel consists of two to four evaluators. Each evaluator receives the same set of submissions and the same scoring framework.

The scoring framework is built from the evaluation criteria published in the RFP. Each criterion has a weighting, and each response is scored against a defined scale. A common scale runs from zero to five, where zero means the response did not address the requirement at all and five means it demonstrated clear evidence of exceeding the requirement with no weaknesses.
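A scale like this can be sketched as a simple lookup. The endpoint definitions below follow the description above; the intermediate bands are typical wording used for illustration only, not taken from any specific RFP.

```python
# Illustrative zero-to-five scoring scale. The 0 and 5 definitions
# follow the text; the middle bands are assumed typical wording.
SCORING_SCALE = {
    0: "Did not address the requirement",
    1: "Addressed the requirement with major weaknesses",
    2: "Partially met the requirement, with some weaknesses",
    3: "Met the requirement",
    4: "Met the requirement, with some strengths",
    5: "Exceeded the requirement, with no weaknesses",
}

def describe(score: int) -> str:
    """Return the scale definition for a given score."""
    return SCORING_SCALE[score]
```

Each evaluator applies the same definitions, which is what makes the later moderation discussion possible: a two and a four can be compared against shared wording, not personal impressions.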

Evaluators score independently. They do not discuss submissions with each other during this phase. Each evaluator reads your proposal, finds the section that responds to each criterion, and assigns a score based on the evidence you provided.

This is the critical point most proposal writers miss. Evaluators are not reading your proposal like a novel. They are not forming a general impression. They are working through a matrix, criterion by criterion, looking for specific evidence that you can do what they asked. If they cannot find your response to a requirement, or if your response is vague, that criterion scores low. It does not matter that you answered a different question brilliantly three pages earlier.

Pro tip: Structure your proposal to mirror the evaluation criteria. If the RFP lists ten requirements, your proposal should have ten clearly labelled sections that respond to each one in order. Make the evaluator’s job easy and they will reward you for it.

Step three: evidence and examples

Evaluators are trained to score evidence, not claims. There is a significant difference between “we have extensive experience in this area” and “we delivered a comparable project for [client] in 2024, migrating 1.2 million records in eight weeks with zero data loss.”

The first statement is an assertion. The second is evidence. Evaluators give higher scores to responses that include specific, relevant examples with measurable outcomes. They are looking for proof that you have done what you say you can do, ideally in a context similar to what the buyer is asking for.

Case studies, client references, named team members with relevant qualifications, and quantified outcomes all count as evidence. General statements about capability, no matter how confidently written, do not.

This applies equally to methodology questions. If the RFP asks how you will approach the work, evaluators want to see a concrete plan with stages, timelines, and deliverables. “Our proven methodology ensures successful delivery” is not a methodology. It is a sentence that contains the word methodology.

Step four: moderation

After all evaluators have completed their individual scoring, the panel convenes for moderation. This is where individual scores are compared, discussed, and reconciled into a final consensus score for each criterion.

Moderation exists to reduce bias and ensure consistency. If one evaluator scored a section at two and another scored it at four, the panel discusses why. They go back to the submission, review the evidence, and agree on a score that both can justify against the scoring guidance.

This is an important safeguard, and it means that outlier scores, whether generous or harsh, rarely survive into the final result. What matters in moderation is whether the evidence in your proposal supports the score. If your response is clear, well-evidenced, and directly addresses the criterion, the moderation discussion is short and the score holds. If your response is ambiguous, evaluators may interpret it differently, and the moderation discussion becomes a negotiation that rarely lands in your favour.

Step five: scoring and ranking

Once moderation is complete, the weighted scores are calculated. Each criterion score is multiplied by its weighting, and the results are summed to produce a total score for each submission.

In many procurements, price is evaluated separately and combined with the quality score using a predetermined ratio. A common split is 60% quality and 40% price, though this varies. The RFP will state the ratio.
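The arithmetic in these two steps can be sketched as follows. The criterion names, weightings, scores, and the 60/40 split are hypothetical examples, not figures from any real procurement; the RFP you are responding to will state its own.

```python
def weighted_quality_score(scores: dict[str, float],
                           weights: dict[str, float]) -> float:
    """Multiply each criterion's 0-5 consensus score by its weighting,
    sum the results, and normalise to a 0-100 quality score so it can
    be blended with a price score on the same basis."""
    max_score = 5.0
    return sum(scores[c] * weights[c] for c in weights) / max_score * 100

def blended_total(quality: float, price: float,
                  quality_ratio: float = 0.6) -> float:
    """Combine quality and price scores using the published ratio
    (a 60/40 quality/price split in this sketch)."""
    return quality * quality_ratio + price * (1 - quality_ratio)

# Hypothetical weightings and moderated scores for one submission.
weights = {"technical": 0.30, "team": 0.10, "delivery": 0.30, "social": 0.30}
scores = {"technical": 4, "team": 3, "delivery": 4, "social": 3}

quality = weighted_quality_score(scores, weights)   # 72.0 out of 100
total = blended_total(quality, price=80.0)          # 75.2 out of 100
```

Note how the weightings dominate: moving the technical score from 4 to 5 adds three times as much to the total as the same improvement in the team section, because of the 30% versus 10% weighting.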

The submissions are then ranked by total score, and the highest-scoring bidder is typically recommended for contract award, subject to any standstill period or required governance approvals.

This is the moment where the weightings published in the RFP become critically important. If technical approach is weighted at 30% and team experience at 10%, a strong technical response contributes three times as much to your total score as a strong team section. Allocating your effort in proportion to the weightings is one of the simplest and most effective things you can do.

What this means for how you write

Understanding the evaluation process changes how you approach proposal writing. You are not writing to impress. You are writing to score.

That means:

  • Mirror the structure. If the RFP lists requirements in a numbered sequence, respond in the same sequence. Do not make the evaluator hunt for your answers.
  • Lead with evidence. Every claim should be supported by a specific example, a measurable outcome, or a named reference. If you cannot evidence it, reconsider whether it adds value.
  • Respect the weightings. Allocate your word count and effort in proportion to the published weightings. A section worth 5% of the total score does not need three pages.
  • Be explicit. If the requirement asks for three things, answer all three. Do not assume the evaluator will infer your capability from a related answer elsewhere.
  • Write for skim-reading. Evaluators reading their fifth submission of the day will not parse dense paragraphs looking for your key point. Use headings, bullet points, and bold text to make your evidence findable.
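The "respect the weightings" point lends itself to a rough planning heuristic: allocate your word count across sections in proportion to the published weightings. The figures below are illustrative only.

```python
def word_budget(total_words: int,
                weights: dict[str, float]) -> dict[str, int]:
    """Split a total word count across sections in proportion to
    their published weightings. A planning heuristic, not a rule
    from any RFP: adjust where a low-weighted section still needs
    a minimum amount of detail to answer the question fully."""
    return {section: round(total_words * w)
            for section, w in weights.items()}

# Hypothetical 10,000-word response with example weightings.
budget = word_budget(10_000, {"technical": 0.30, "team": 0.10,
                              "delivery": 0.30, "social_value": 0.30})
# A criterion weighted at 10% gets roughly 1,000 of the 10,000 words.
```

Treat the output as a starting point. The aim is simply to stop a 5% section absorbing three pages of effort that a 30% section needed more.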

The evaluation process is not designed to catch you out. It is designed to identify the supplier who best meets the buyer’s requirements, based on evidence. If you write with that process in mind, you give yourself the best possible chance of being that supplier.