A human-centered approach is the cornerstone of ethical AI. To implement this approach:
- Start by outlining the risks you want to avoid in terms of bias, transparency, explainability, fairness, toxicity, hallucinations and all the other dimensions that make up Responsible AI.
- Define the metrics you will use to measure the presence of these risks.
- Measure the developed models against these metrics to ensure their reliability and trustworthiness. For example, does the generated content comply with your bias metrics?
- You might develop custom algorithms that provide a layer of explainability for the models’ output.
- You can even develop an analytical engine that monitors the ML pipeline and the compliance with these metrics.
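The measurement step above can be sketched in a few lines of Python. This is a minimal, illustrative example, not a production evaluation suite: the metric name, the threshold, and the keyword-based toxicity check are all placeholder assumptions standing in for whatever real metrics you define.

```python
# Toy lexicon standing in for a real toxicity classifier (assumption).
TERMS_FLAGGED_AS_TOXIC = {"idiot", "stupid"}

def toxicity_rate(outputs):
    """Fraction of outputs containing a flagged term (toy metric)."""
    flagged = sum(
        any(term in text.lower() for term in TERMS_FLAGGED_AS_TOXIC)
        for text in outputs
    )
    return flagged / len(outputs)

def evaluate(outputs, thresholds):
    """Score each metric and check it against its compliance threshold."""
    scores = {"toxicity_rate": toxicity_rate(outputs)}
    return {
        name: {"score": score, "passes": score <= thresholds[name]}
        for name, score in scores.items()
    }

outputs = ["Here is a helpful summary.", "What a stupid question."]
report = evaluate(outputs, thresholds={"toxicity_rate": 0.1})
# One of the two outputs is flagged, so toxicity_rate is 0.5,
# which fails the (hypothetical) 0.1 compliance threshold.
```

An analytical engine monitoring the ML pipeline would run checks like this continuously over sampled outputs and alert when any metric drifts out of compliance.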
Interested in learning more?
Check out this 9-minute demo that covers MLOps best practices for generative AI applications.
View this webinar with QuantumBlack, AI by McKinsey, which covers the challenges of deploying and managing LLMs in live, user-facing business applications.
Check out this demo and repo that demonstrate how to fine-tune an LLM and build an application.