Common questions about Scorecard’s AI evaluation platform
What is an eval/evaluation?
How does Scorecard differ from other AI evaluation tools?
What programming languages does Scorecard support?
Can Scorecard be used for RLHF and agent training?
What are the text limits in Scorecard?
Are there rate limits for API usage?
What is the playbook text limit?
How does metadata work in Scorecard?
How is latency measured and reported?
What types of AI systems can Scorecard evaluate?
How does Scorecard handle sensitive data and privacy?
Can I run Scorecard evaluations offline or on-premises?
How do I migrate from other evaluation tools?
How does Scorecard pricing work?
What counts as a score?