When crafting your evaluation criteria, consider the following guidelines to ensure effective and meaningful assessments:

Be Specific and Focused: Clearly define the quality or behavior you want to evaluate. Avoid vague statements. Focus on a single aspect per criterion to maintain clarity.

  • Example: Instead of “good,” use “a friendly and encouraging tone.”

Use Clear Direction: Begin your criteria with an explicit directive such as "Reward responses that..." or "Penalize responses where...".

  • Example: "Reward responses that use empathetic language when addressing user concerns."

Monotonic or Appropriately Qualified Qualities: Ideally, the quality you’re assessing should be monotonic, i.e. more of the quality is always better (for rewards) or always worse (for penalties). For non-monotonic qualities, where more is not always better, add a qualifier such as “appropriate” so that higher scores still represent better adherence to the desired quality.

  • Example: Instead of "Reward responses that are polite", where politeness can become excessive, use "Reward responses that use an appropriate level of politeness", which rewards responses that are polite but not overly so.

Avoid Conjunctions: Focus on one quality at a time. A conjunction such as “and” usually signals that a criterion bundles multiple qualities, which leads to poorly defined behavior when a response satisfies one quality but not the other.

  • Example: Instead of "The assistant should be concise and informative", split it into two criteria: "Reward responses that are concise" and "Reward responses that are informative".
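Splitting also means each quality is scored in its own evaluation call, so a response that is concise but uninformative loses points on exactly one axis rather than receiving one muddled combined score. A sketch, reusing the same hypothetical endpoint and payload shape as above:

```python
import requests

messages = [
    {"role": "user", "content": "How do I reset my password?"},
    {"role": "assistant", "content": "Go to Settings > Account > Reset password."},
]

# One criterion per quality, one request per criterion.
for criterion in [
    "Reward responses that are concise",
    "Reward responses that are informative",
]:
    resp = requests.post(
        "https://api.example.com/v1/evals",  # placeholder URL, as above
        headers={"API-Key": "YOUR_API_KEY"},
        json={"messages": messages, "evaluation_criteria": criterion},
    )
    print(criterion, "->", resp.json())
```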

Avoid LLM Keywords: Composo’s reward model is fine-tuned from LLMs trained on conversation-format data. Avoid alternate definitions of ‘User’ and ‘Assistant’ in your prompts and criteria, as they can conflict with the LLM conversation roles ‘user’ and ‘assistant’.

  • Example: Instead of "Reward responses that comprehensively address the User Question", rename the ‘User Question’ section of your prompt and use "Reward responses that comprehensively address the Target Question".
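Concretely, a prompt that once labeled a section ‘User Question’ can label it ‘Target Question’ instead, so the criterion can reference it without colliding with the conversation roles. A hypothetical template:

```python
# Rename prompt sections so your criteria never collide with the LLM
# conversation roles "user" and "assistant". (Hypothetical template.)
target_question = "What are the side effects of ibuprofen?"
retrieved_context = "..."  # retrieved documents would go here

prompt = f"""Target Question: {target_question}

Context: {retrieved_context}

Answer the Target Question using only the Context above."""

# The criterion references the renamed section, avoiding the keyword clash:
criterion = "Reward responses that comprehensively address the Target Question"
```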

Domain-Specific: Domain expertise is your secret weapon for evaluation quality. Injecting your own domain knowledge of what a ‘good’ answer looks like gives your evaluation model more leverage over the generative model.

Qualifiers (Optional): If the criterion applies only to certain situations, include a qualifier starting with “if” to specify when it should be applied.

  • Example: "Reward responses that provide code examples if the user asks for implementation details"

Criterion Template:

[Direction] responses [quality] [qualifier (optional)].

Components:

  • Direction: “Reward” or “Penalize”.
  • Quality: The specific property or behavior to evaluate.
  • Qualifier (Optional): An “if” statement specifying conditions.
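To make the structure concrete, here is a hypothetical helper (not part of any Composo SDK) that assembles a criterion string from the three components:

```python
from typing import Optional

def build_criterion(direction: str, quality: str,
                    qualifier: Optional[str] = None) -> str:
    """Assemble a criterion from the three components above.

    Hypothetical helper for illustration only.
    """
    if direction not in ("Reward", "Penalize"):
        raise ValueError("direction must be 'Reward' or 'Penalize'")
    parts = [direction, "responses", quality]
    if qualifier is not None:
        if not qualifier.startswith("if "):
            raise ValueError("qualifier should start with 'if'")
        parts.append(qualifier)
    return " ".join(parts)

print(build_criterion(
    "Reward",
    "that provide code examples",
    "if the user asks for implementation details",
))
# Reward responses that provide code examples if the user asks for implementation details
```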

Example Criteria:

  • "Reward responses that provide a comprehensive analysis of the code snippet"
  • "Penalize responses where the language is overly technical if the response is for a beginner"
  • "Reward responses that use an appropriate level of politeness"
