The short version

  • Most proposals fail not because the solution is wrong but because the document does not answer what the buyer actually asked.
  • The seven gaps fall into three categories: missing content, weak communication, and avoidable process errors.
  • Requirement gaps, where a stated criterion goes completely unaddressed, are the single biggest cause of lost marks.
  • Evaluators search for answers to specific criteria rather than reading cover to cover, so a buried response counts as a missing one.
  • A structured review against the original brief catches most of these gaps before submission.

Most people who lose a contract assume the winner had a better solution. Sometimes that is true. More often, the winner had a better document.

The proposals that get rejected or scored poorly are rarely terrible. They are written by capable people who know their work. The problem is not competence. The problem is that the proposal, as a document, does not do the job it was hired to do: answer every question the buyer asked, in a way the evaluator can find and score.

This matters more than most people realise. Industry research consistently shows that average win rates for RFP responses sit between 5% and 40%, depending on the sector and whether the opportunity was qualified before responding. That means the majority of proposals fail. Not because the market is impossibly competitive, but because most submissions contain avoidable errors that the writer never noticed.

Review hundreds of proposals across IT, consulting, construction, and creative services, and the same patterns keep showing up. Seven specific types of gap account for the vast majority of lost marks. They fall into three broad categories: missing content (requirement, evidence, and proof gaps), weak communication (clarity, formatting, and differentiator gaps), and process errors (compliance gaps).

Some are obvious in hindsight. Most are invisible to the person who wrote the proposal. All of them are fixable before submission if you know where to look.

1. Requirement gaps: the question you did not answer

A requirement gap is the simplest and most damaging type of failure. The RFP asked a direct question. The proposal did not answer it.

This is not about weak answers. It is about absent ones: the RFP specified a requirement, and the proposal simply did not address it at all.

How it happens: A sales manager at a mid-sized IT firm submits a proposal for a £200,000 managed services contract. The RFP lists 14 evaluation criteria across four sections. His team addresses 12 of them thoroughly. The two they miss are data residency and incident response SLAs, both buried in an appendix to the main document. The client scores those sections zero. The overall score drops from a competitive 78% to a losing 64%.

He did not lack the capability. He lacked the coverage.

Requirement gaps happen because people respond to the brief they remember rather than the brief as written. Under time pressure, it is easy to address the big themes and miss the specific criteria listed in an appendix, a supplementary document, or a single line in a dense paragraph. Evaluators do not give partial credit for good intentions. If the requirement is not addressed, the score is zero.

The cost of one missed requirement depends on the weighting, but in a typical evaluation where each criterion carries equal weight across 12-15 questions, a single zero can drop your overall score by 6-8 percentage points. That is usually enough to move you from shortlist to rejection.
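
To make the arithmetic concrete, here is the calculation behind those figures, assuming equal weighting and no partial credit:

  15 criteria: each worth 100 ÷ 15 ≈ 6.7 points, so one zero costs up to 6.7 points
  12 criteria: each worth 100 ÷ 12 ≈ 8.3 points, so one zero costs up to 8.3 points
  14 criteria, two zeros (the example above): up to 2 × 7.1 ≈ 14 points, taking 78% down to 64%

The "up to" matters: a zero only costs the full weight of a criterion if you would otherwise have scored well on it. For a strong proposal, that is exactly the case, which is why a single gap hurts the best submissions the most.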

2. Evidence gaps: claims without proof

Evidence gaps are subtler than requirement gaps. The proposal addresses the requirement, but the response is a claim rather than a demonstration.

Consider the difference between “We have extensive experience delivering similar projects” and “We delivered a £180,000 network migration for a 200-person financial services firm in Q3 2024, completing three weeks ahead of schedule with zero downtime.” The first is a statement. The second is evidence. Evaluators score evidence.

How it happens: A consulting firm responds to an RFP for organisational change management. The proposal states they have “deep expertise in stakeholder engagement.” The evaluation panel, using a 0-5 scoring framework, gives this a 2 out of 5. A competing firm answers the same question with a named case study, a specific methodology reference, and measurable outcomes. They score 4. The consulting firm did the work. They just did not prove it.

Evidence gaps are particularly common among experienced professionals who assume their reputation precedes them. It does not. The evaluator has your document, the scoring matrix, and a deadline. That is it.

The fix is straightforward but requires discipline: for every claim in your proposal, ask whether the evaluator could verify it from the text alone. If the answer is no, add a specific example, a metric, a date, a client reference, or a named methodology. Evaluators working to a 0-5 scoring scale typically reserve top marks for responses that provide specific, verifiable evidence. A general statement of capability rarely scores above a 3, regardless of how true it is.

3. Compliance gaps: the rules you broke

Compliance gaps are the paperwork failures. Page limits exceeded. Wrong file format. Missing declarations or certificates. Appendices not labelled according to the instructions. These are not about the quality of your thinking. They are about whether you followed the submission rules.

How it happens: A construction firm submits a strong tender for a local authority highways contract. The methodology is solid, the pricing is competitive, and the team has directly relevant experience. The tender instructions require all method statements to be submitted as separate PDF attachments labelled “Appendix A”, “Appendix B”, and so on. The firm submits them as sections within the main document. The evaluation panel, following their own procurement rules, marks the method statements as non-compliant. The tender is not rejected outright, but those sections receive reduced scores because the panel has to locate and cross-reference content that should have been clearly separated.

Compliance failures feel unfair. They are also entirely avoidable. The instructions were in the document. Someone needed to read them and check.

4. Formatting gaps: making evaluators work harder than they should

Formatting gaps do not usually cause outright rejection. They cause friction. An evaluator searching for your answer to criterion 4.3 should not have to read nine pages of continuous prose to find it.

How it happens: An agency founder writes a pitch response as a single flowing narrative. It is well written, persuasive even. But the RFP listed eight specific evaluation criteria, and the evaluator has a spreadsheet with eight rows to fill in. The evaluator reads the entire document, tries to extract relevant content for each criterion, and runs out of patience by row five. Scores drop. Not because the content is bad, but because the evaluator had to dig for it.

The rule is simple: match the structure of your response to the structure of the evaluation. If the RFP numbers its requirements, number your answers. If it uses specific section headings, mirror them. Make the evaluator’s job easy and they will score you fairly. Make them work and they will not score you generously.
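
In practice, mirroring is mechanical. A quick illustration, with a hypothetical criterion invented for the example:

  RFP criterion:  4.3 Describe your approach to service reporting
  Your heading:   4.3 Approach to service reporting
  Your opening:   We will report monthly; each report covers progress, risks, and spend.

An evaluator with a spreadsheet row labelled 4.3 finds that answer in seconds.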

This is particularly important in processes where multiple evaluators score independently before a moderation session. If evaluator one finds your answer to criterion 4.3 on page seven and evaluator two misses it entirely, your moderated score will suffer. A clearly structured proposal removes that risk.

Pro tip: Before finalising any proposal, read the evaluation criteria and check that every single criterion has a clearly identifiable answer in your document. If you have to search for it, so will the evaluator.

5. Clarity gaps: saying what you meant to say

Clarity gaps are close cousins of evidence gaps. The proposal addresses the requirement and may even include evidence, but the response is written in a way that leaves the evaluator uncertain about what is actually being offered.

This happens most often with technical content. The writer understands the nuance. The evaluator does not. Sentences run long, qualifications pile up, and the core commitment gets lost in language that protects more than it communicates.

How it happens: A business development director at a consulting firm writes a detailed response to a question about project governance. She covers steering committees, escalation procedures, risk registers, and reporting cadence. The section runs to 600 words. The evaluator, who has 40 proposals to mark, reads it twice and still cannot extract a clear answer to the actual question: how often will you report to us and what will the reports contain? The response scores a 3 out of 5. A competitor’s response is half the length, states the reporting structure in three bullet points, and scores a 5.

Good proposals are not about writing more. They are about writing clearly enough that the evaluator never has to guess what you mean.

6. Differentiator gaps: sounding like everyone else

Most proposals in a competitive process sound remarkably similar. The same methodology frameworks, the same corporate language, the same assertions of quality and commitment. When every proposal reads the same, evaluators have no reason to score one higher than another.

Differentiator gaps occur when a proposal fails to articulate what is specifically different about this team, this approach, or this solution for this particular client.

How it happens: Four IT firms respond to the same managed services RFP. All four describe their service desk, their monitoring tools, and their escalation processes. All four use almost identical language about “proactive monitoring” and “dedicated account management.” The winning firm does something different. They reference the client’s existing infrastructure by name, explain the specific migration risks for that environment, and propose a transition plan tailored to the client’s fiscal year. They score 5 points higher on the methodology section, and that margin wins the contract.

Differentiation does not mean being flashy. It means being specific to the client and the opportunity rather than generic across all proposals.

7. Proof gaps: the evidence that does not fit

The proof gap is a more specific version of the evidence gap. Here, the proposal includes evidence, but the evidence does not match the requirement closely enough to score well.

A case study from the wrong sector. A reference from five years ago when the RFP asks for recent experience. A team CV that demonstrates seniority but not the specific qualification the evaluation criteria require.

How it happens: A firm responds to a public sector tender requiring evidence of delivering similar projects within the last three years. They include an impressive case study, but it was completed in 2019. The evaluator gives a reduced score because the evidence falls outside the stated timeframe. A competing firm includes a smaller but more recent project and scores higher. Relevance beats impressiveness.

Proof gaps are frustrating because the organisation genuinely has the capability. The wrong proof was selected, and the evaluator scored what was on the page, not what exists in the real world.

Most gaps share a single root cause

These seven gaps look different on the surface, but they have one thing in common. Every one of them is caused by a disconnect between what the RFP asks for and what the proposal delivers. Not a capability gap. A document gap.

The person writing the proposal almost always has the knowledge, the experience, and the solution to address every requirement. The failure is in translation. The brief says one thing. The proposal says something slightly different. Or it says the right thing in the wrong place. Or it says nothing at all about a requirement the writer assumed was covered elsewhere.

This is why the most experienced proposal writers still lose. Experience makes you faster, but it does not make you immune to gaps. If anything, experienced writers are more prone to the curse of knowledge: they read what they meant to write rather than what is actually on the page. They fill in the blanks automatically, because they know the answer. The evaluator does not.

What catches these gaps before the evaluator does

The solution is not writing better proposals from scratch. It is reviewing proposals systematically against the original requirements before submission. A structured, requirement-by-requirement comparison between the brief and the response catches most of these gaps before the evaluator ever sees them.

That review can be done manually. Plenty of bid teams do it well, working through a compliance matrix and cross-referencing every criterion. The challenge is time. A thorough manual review of a 30-page proposal against a detailed RFP takes two to three hours. For teams responding to multiple opportunities each month, that adds up quickly.
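
That matrix does not need to be sophisticated. A minimal sketch, with hypothetical entries for illustration:

  RFP ref   Requirement               Where answered       Evidence included        Status
  4.1       Data residency            Section 3.2, p. 7    ISO 27001 certificate    Compliant
  4.2       Incident response SLAs    Not yet addressed    None                     GAP
  4.3       Reporting cadence         Section 5.1, p. 12   Sample monthly report    Compliant

Every row marked GAP is a requirement gap waiting to cost you marks. Every row with no evidence entry is an evidence gap.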

The bigger problem is that most companies responding to RFPs do not have a dedicated bid team. The person writing the proposal is the person reviewing it, under the same time pressure, with the same blind spots. A sales manager juggling five active deals cannot spend three hours reviewing each proposal. A consultant writing a bid on Thursday evening for a Friday deadline does not have time for a second pair of eyes.

This is the problem ReqFit was built to solve. It reads both documents, maps every requirement to your response, identifies the gaps, and tells you exactly where to focus. Not a rewrite. A structured, honest assessment of where your proposal stands before you submit it.