Assessment Workflows
Learn how to create and configure your first assessment workflow in Highlighter through this comprehensive tutorial.
Overview
This tutorial will guide you through creating a complete assessment workflow from start to finish. You'll learn how to set up data sources, configure assessment stages, assign reviewers, and monitor progress.
Prerequisites
Before starting this tutorial, ensure you have:
- Access to a Highlighter platform account
- Administrator or workflow creator permissions
- Sample data files to work with
- Basic understanding of your assessment requirements
Tutorial Goals
By the end of this tutorial, you will have:
- Created a functional assessment workflow
- Configured both machine and human assessment stages
- Set up data routing and quality controls
- Launched your first workflow order
- Monitored workflow performance
Step 1: Planning Your Workflow
Define Your Objectives
Before creating the workflow, clearly define:
- Purpose: What are you trying to assess or evaluate?
- Input Data: What type of data will you be processing?
- Output Goals: What decisions or classifications do you need?
- Quality Standards: What level of accuracy is required?
Example Scenario
For this tutorial, we'll create a workflow that:
- Processes uploaded images for quality assessment
- Uses AI to detect potential issues automatically
- Routes questionable images to human reviewers
- Generates final approval/rejection decisions
Step 2: Creating the Workflow
Access the Workflow Builder
- Log into your Highlighter platform
- Navigate to the "Workflows" section
- Click "Create New Workflow"
- Choose "Assessment Workflow" as the template type
Basic Workflow Configuration
- Workflow Name: "Image Quality Assessment"
- Description: "Automated quality assessment of uploaded images with human review escalation"
- Category: Quality Control
- Priority: Normal
Data Source Configuration
- Input Source: Select your data source (file upload, database, API)
- File Types: Specify accepted formats (JPG, PNG, etc.)
- Size Limits: Set reasonable file size constraints
- Validation Rules: Configure basic data validation
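If you prefer to capture these settings in code or version control, the configuration from this step might be sketched as plain data. All field names below are illustrative assumptions, not an actual Highlighter API or schema:

```python
# Illustrative representation of the workflow settings above.
# Every key here is an assumption, not a real Highlighter schema.
workflow_config = {
    "name": "Image Quality Assessment",
    "description": ("Automated quality assessment of uploaded images "
                    "with human review escalation"),
    "category": "Quality Control",
    "priority": "normal",
    "data_source": {
        "input": "file_upload",                 # or "database", "api"
        "accepted_formats": ["jpg", "jpeg", "png"],
        "max_file_size_mb": 25,                 # assumed limit; tune as needed
    },
}
```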
Step 3: Configuring Assessment Stages
Stage 1: Data Intake and Validation
- Stage Name: "Data Validation"
- Stage Type: Automated Processing
- Functions:
  - File format verification
  - Size and resolution checks
  - Basic quality metrics
  - Duplicate detection
Configuration:
- Set timeout limits for processing
- Configure error handling for invalid files
- Define quality thresholds for automatic rejection
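To make the validation functions concrete, here is a minimal Python sketch of what this stage might check. It assumes the Pillow library is available, and the format, size, and resolution limits are placeholder values; Highlighter's built-in validators may work differently.

```python
import hashlib
from pathlib import Path

from PIL import Image  # assumes Pillow is installed (pip install Pillow)

ACCEPTED_FORMATS = {"JPEG", "PNG"}  # placeholder thresholds; tune to
MAX_FILE_SIZE = 25 * 1024 * 1024    # your own quality requirements
MIN_WIDTH, MIN_HEIGHT = 640, 480

_seen_hashes: set[str] = set()  # in-memory duplicate tracking for the sketch

def validate_image(path: Path) -> tuple[bool, str]:
    """Return (is_valid, reason) for a single uploaded file."""
    if path.stat().st_size > MAX_FILE_SIZE:
        return False, "file too large"
    try:
        with Image.open(path) as img:
            if img.format not in ACCEPTED_FORMATS:
                return False, f"unsupported format: {img.format}"
            width, height = img.size
            if width < MIN_WIDTH or height < MIN_HEIGHT:
                return False, f"resolution too low: {width}x{height}"
    except Exception:
        return False, "unreadable or corrupt image"
    # Duplicate detection via a content hash.
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in _seen_hashes:
        return False, "duplicate file"
    _seen_hashes.add(digest)
    return True, "ok"
```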
Stage 2: Machine Assessment
- Stage Name: "AI Quality Analysis"
- Stage Type: Machine Assessment
- Agent Selection: Choose or configure your AI quality assessment agent
- Processing Parameters:
  - Batch size: 10 images
  - Confidence threshold: 85%
  - Processing timeout: 30 seconds per image
Quality Gates:
- High confidence (>90%): Auto-approve
- Medium confidence (70-90%): Route to human review
- Low confidence (<70%): Flag for detailed review
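Expressed as code, these quality gates reduce to a small routing function. The thresholds come from the list above; the route names are illustrative placeholders for your actual stage identifiers:

```python
def route_by_confidence(confidence: float) -> str:
    """Map an AI confidence score in [0.0, 1.0] to the next stage."""
    if confidence > 0.90:
        return "auto_approve"          # high confidence
    if confidence >= 0.70:
        return "human_review"          # medium confidence
    return "human_review_priority"     # low confidence, flagged for review
```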
Stage 3: Human Review
- Stage Name: "Expert Review"
- Stage Type: Human Assessment
- Review Type: Classification and Quality Control
- Reviewer Assignment:
- Assign to "Quality Reviewers" group
- Set workload distribution to "Even"
- Configure backup reviewers
Review Configuration:
- Review interface: Image annotation tool
- Decision options: Approve, Reject, Request Clarification
- Time limit: 2 minutes per image
- Quality requirements: 95% accuracy
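Collected in one place, the review-stage settings from this step might look like the following. Again, the keys are illustrative rather than a real Highlighter schema:

```python
# Illustrative review-stage settings mirroring the tutorial values above.
review_stage = {
    "name": "Expert Review",
    "type": "human_assessment",
    "reviewer_group": "Quality Reviewers",
    "workload_distribution": "even",
    "backup_reviewers": True,
    "interface": "image_annotation",
    "decisions": ["approve", "reject", "request_clarification"],
    "time_limit_seconds": 120,   # 2 minutes per image
    "required_accuracy": 0.95,
}
```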
Step 4: Workflow Logic and Routing
Decision Points
Configure routing logic between stages:
- After Data Validation:
  - Valid files → AI Quality Analysis
  - Invalid files → Automatic rejection with notification
- After AI Assessment:
  - High confidence approval → Final approval
  - Medium confidence → Human review
  - Low confidence/rejection → Human review with priority flag
- After Human Review:
  - Approved → Final approval
  - Rejected → Final rejection with feedback
  - Clarification needed → Return to previous stage
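One way to keep this routing logic auditable is to express it as data: a table from (stage, outcome) pairs to the next stage. The stage and outcome names below are illustrative only:

```python
# Routing table mirroring the decision points above; names are illustrative.
ROUTING = {
    ("data_validation", "valid"): "ai_quality_analysis",
    ("data_validation", "invalid"): "reject_and_notify",
    ("ai_quality_analysis", "high_confidence"): "final_approval",
    ("ai_quality_analysis", "medium_confidence"): "human_review",
    ("ai_quality_analysis", "low_confidence"): "human_review_priority",
    ("human_review", "approved"): "final_approval",
    ("human_review", "rejected"): "final_rejection_with_feedback",
    ("human_review", "clarification"): "ai_quality_analysis",  # previous stage
}

def next_stage(stage: str, outcome: str) -> str:
    """Look up the next stage for an item; raises KeyError on unknown pairs."""
    return ROUTING[(stage, outcome)]
```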
Quality Control Measures
- Audit Sampling: 10% of auto-approved items undergo human review
- Consensus Reviews: Difficult cases reviewed by multiple experts
- Escalation Rules: Complex cases escalated to senior reviewers
- Feedback Loops: Human decisions used to improve AI performance
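The audit-sampling rule, for instance, is simple to implement. Here is a sketch under the assumption that item IDs are available as a list; `select_for_audit` is a made-up helper name, not a platform API:

```python
import random

def select_for_audit(auto_approved_ids: list[str],
                     rate: float = 0.10,
                     seed: int | None = None) -> list[str]:
    """Randomly pick ~10% of auto-approved items for human audit review."""
    if not auto_approved_ids:
        return []
    rng = random.Random(seed)               # seed only for reproducible tests
    k = max(1, round(len(auto_approved_ids) * rate))
    return rng.sample(auto_approved_ids, k)
```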
Step 5: Testing Your Workflow
Validation Testing
- Test Data Preparation:
  - Gather representative sample data (20-30 files)
  - Include edge cases and challenging examples
  - Prepare expected outcomes for comparison
- Workflow Testing (see the sketch after this list):
  - Run test data through the workflow
  - Verify each stage functions correctly
  - Check routing logic and decision points
  - Validate output quality and format
- Performance Testing:
  - Test with larger data volumes
  - Monitor processing times and resource usage
  - Identify potential bottlenecks
  - Verify scalability under load
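A small harness can automate both the correctness check and the timing measurements. This is a sketch under the assumption that you have some way to submit one file and get back its decision; `run_workflow` below is a placeholder for that call:

```python
import time

def run_workflow(path: str) -> str:
    """Placeholder: submit one file through the workflow, return its decision."""
    raise NotImplementedError  # replace with your actual submission call

def evaluate(test_cases: dict[str, str]) -> None:
    """Compare decisions against expected outcomes and measure throughput.

    `test_cases` maps file paths to expected decisions, e.g.
    {"imgs/blurry.jpg": "reject", "imgs/clean.png": "approve"}.
    """
    start, correct = time.perf_counter(), 0
    for path, expected in test_cases.items():
        actual = run_workflow(path)
        if actual == expected:
            correct += 1
        else:
            print(f"MISMATCH {path}: expected {expected}, got {actual}")
    elapsed = time.perf_counter() - start
    print(f"accuracy: {correct / len(test_cases):.0%}, "
          f"throughput: {len(test_cases) / elapsed:.2f} items/s")
```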
User Acceptance Testing
- Reviewer Training:
  - Train human reviewers on the new workflow
  - Provide guidelines and examples
  - Conduct practice sessions
  - Address questions and concerns
- Feedback Collection:
  - Gather input from all stakeholders
  - Test user interfaces and tools
  - Identify usability improvements
  - Refine processes based on feedback
Step 6: Launching Your Workflow
Deployment Preparation
- Final Configuration Review:
  - Verify all settings and parameters
  - Double-check reviewer assignments
  - Confirm quality thresholds
  - Test notification systems
- Documentation:
  - Create user guides for reviewers
  - Document workflow processes
  - Prepare troubleshooting guides
  - Set up training materials
Go-Live Process
- Pilot Launch:
  - Start with limited data volume
  - Monitor closely for issues
  - Make adjustments as needed
  - Gradually increase volume
- Full Deployment:
  - Scale up to production volumes
  - Monitor performance metrics
  - Collect user feedback
  - Optimize based on real-world usage
Step 7: Monitoring and Optimization
Performance Metrics
Monitor these key indicators:
- Throughput: Items processed per hour/day
- Quality: Accuracy rates and error percentages
- Efficiency: Average processing time per stage
- User Satisfaction: Feedback from reviewers and stakeholders
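If the platform exposes per-item records (an assumption; adjust to whatever your reporting tools actually provide), the first three indicators can be computed with a few lines:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ItemRecord:                # illustrative record of one processed item
    stage_seconds: float         # total processing time across stages
    decision: str                # final workflow decision
    ground_truth: str | None     # known-correct label, if the item was audited

def summarize(records: list[ItemRecord], hours: float) -> dict[str, float]:
    """Compute throughput, quality, and efficiency from item records."""
    audited = [r for r in records if r.ground_truth is not None]
    return {
        "throughput_per_hour": len(records) / hours,
        "accuracy": (mean(r.decision == r.ground_truth for r in audited)
                     if audited else float("nan")),
        "avg_seconds_per_item": mean(r.stage_seconds for r in records),
    }
```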
Regular Optimization
- Weekly Reviews:
  - Analyze performance metrics
  - Identify bottlenecks or issues
  - Review quality outcomes
  - Collect user feedback
- Monthly Optimization:
  - Adjust thresholds and parameters
  - Retrain AI models with new data
  - Update reviewer assignments
  - Implement process improvements
- Quarterly Assessment:
  - Comprehensive workflow review
  - Cost-benefit analysis
  - Strategic alignment check
  - Planning for future enhancements
Troubleshooting Common Issues
Low Throughput
- Check for processing bottlenecks
- Verify adequate reviewer capacity
- Optimize batch sizes and timeouts
- Consider parallel processing options
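If the bottleneck is per-item I/O (uploads or API calls) rather than CPU, a thread pool is often the simplest parallel option. A minimal sketch, assuming a `worker` callable that processes one file:

```python
from concurrent.futures import ThreadPoolExecutor

def process_batch(paths: list[str], worker, max_workers: int = 8) -> list:
    """Run `worker` over many files concurrently; tune max_workers to taste."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker, paths))
```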
Quality Issues
- Review and update assessment criteria
- Provide additional reviewer training
- Adjust AI model thresholds
- Implement additional quality controls
User Adoption Challenges
- Gather detailed user feedback
- Provide additional training and support
- Simplify complex processes
- Improve user interface design
Next Steps
After completing this tutorial:
- Explore Advanced Features: Learn about complex routing, multi-agent systems, and advanced analytics
- Integration: Connect your workflow with existing business systems
- Scaling: Expand to handle larger volumes and more complex scenarios
- Optimization: Continuously improve performance and quality
Additional Resources
- Advanced Workflow Configuration: Learn about complex scenarios
- Agent Management: Deep dive into AI agent configuration
- Performance Optimization: Advanced techniques for scaling
- Integration Guides: Connect with external systems
Congratulations! You've successfully created your first assessment workflow. This foundation will serve you well as you build more sophisticated assessment systems in Highlighter.