Generating and Editing Answers

How Questionnaire Automation Works

How answers are generated

Step-by-step Flow

  • Choose your settings

  • Watch Answers

Uploading Different Formats

  • Excel

    • Dropdowns

  • Word

    • Checkboxes

  • PDF

  • CSV

  • Web Portals

When 1up cannot answer

  • IDK responses

  • Skipped questions

  • PII

Why doesn’t 1up provide confidence scores?

Large language models (LLMs) struggle with confidence scoring because they are trained primarily to predict the next word in a sequence based on patterns in their training data, not to estimate their own uncertainty or confidence.

1up instead uses a binary system to determine whether a query can be answered. You may see other products that provide confidence scores; in our view, those scores are misleading at best, and an AI model should never be used to grade its own performance. We believe it's always best to let the reader decide on the quality of a 1up-generated response.
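To illustrate the distinction, here is a minimal, hypothetical sketch (the names and data model are illustrative, not 1up's actual implementation): each question is either marked answered or surfaced as unanswered (an IDK response), rather than being assigned a numeric confidence score.

    # Hypothetical sketch only -- not 1up's actual code or data model.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class GeneratedAnswer:
        question: str
        answered: bool              # binary outcome: answerable or not
        text: Optional[str] = None  # filled in only when answered is True

    def resolve_answer(question: str, supporting_passages: list[str]) -> GeneratedAnswer:
        # No numeric confidence score is produced. If nothing in the knowledge
        # base supports an answer, the question comes back as unanswered (an
        # IDK response) instead of a low-confidence guess.
        if not supporting_passages:
            return GeneratedAnswer(question=question, answered=False)
        # A real system would have an LLM draft the answer from the passages;
        # joining them here keeps the sketch self-contained.
        return GeneratedAnswer(question=question, answered=True,
                               text=" ".join(supporting_passages))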

Answer Types

  • Short Answer

  • Long Answer

  • Answer Library

Answer Statuses

  • Skipped

  • Unanswered

  • Partial

  • Full

  • Under Review

  • Approved

Answer Labels

  • 1up-Defined

  • User-Defined

Regenerating answers

  • Individually

  • In Bulk

Reviewing & Editing Answers

As a best practice, we recommend reviewing all responses to make sure they are accurate. Here’s what you can expect for each response:

Related Sources

  • Tag Answers

Save Answer to KB

  • Tag Answers

  • View them in Answer Library

Assigning to Teammates

Comment Stream

Answer History

Troubleshooting Wrong Answers

  • Check the source

  • Teach 1up the right answer

Wrong Sources Used

  • Retry with different tags

  • Delete the source

Incorrectly Extracted Documents

If a document was extracted incorrectly, open a support ticket.