I recently interviewed Dr Heather Domin, Vice President and Head of Office of Responsible AI and Governance at HCLTech, on the evolving landscape of ethical AI implementation.
As organizations worldwide grapple with balancing innovation and responsibility, Dr Domin offers valuable insights drawn from her extensive experience guiding one of the tech industry’s leading players.
In our wide-ranging conversation, she explains the critical distinction between AI governance and responsible AI practices, outlines essential principles for ethical implementation, and shares her vision for leadership in an AI-driven future.
You’ve previously said that there’s a common misconception that conflates governance with responsible AI. Could you elaborate on this distinction and explain why focusing solely on compliance might be insufficient for organisations developing AI systems?
Responsible AI and governance are core building blocks for trust. They overlap but differ in focus, and we need both. Responsible AI focuses on enabling ethical alignment through capabilities such as transparency, explainability, and safety to reduce risk and to benefit both humans and the environment.
On the other hand, AI governance focuses on enabling the appropriate organizational and technical controls, often with an emphasis on aspects such as accountability and compliance with policies and legal requirements.
Organizations that focus solely on compliance risk reputational damage and miss the opportunity to delight clients, employees, and stakeholders. Organizations that take additional steps to align with their core values can gain a sustainable competitive advantage in the market and help ensure their long-term success.
You’ve suggested that CEOs with responsible AI backgrounds may soon emerge. What specific skills and knowledge do you believe these leaders will need to successfully balance innovation with ethical considerations?
CEOs are responsible for creating value for their stakeholders and enabling both the short- and long-term success of their organizations. Knowing how to use AI in a responsible way to drive innovation, productivity, and differentiation can result in a competitive advantage.
Understanding international standards and legal requirements can also help their organizations avoid fines and reputational damage, enabling executives to help steer their organizations towards sustainable gains in the market.
The skills that responsible AI professionals gain by managing risks and thinking critically about AI’s potential impacts and appropriate uses can serve them well in CEO roles, where they must leverage AI in a way that balances innovation with ethical values.
What are the core principles that you advise large businesses and organisations to follow when developing and rolling out AI systems?
The five principles I advise organizations to follow are accountability, fairness, security, privacy, and transparency.
Accountability is about having clear ownership and mechanisms in place to report and address concerns. Fairness focuses on implementing safeguards to enable AI that results in unbiased or equitable outcomes. Security and privacy are essential to protecting AI and its data, including personal and sensitive information. Transparency is key to enabling stakeholders to understand AI and data used within these systems. All these aspects are crucial to enabling responsible AI.
You’ve noted that responsible AI should be integrated into technology development from the start rather than being housed in legal or regulatory functions. What practical steps can organisations take to ensure this integration happens effectively?
Integrating responsible AI into the entire AI lifecycle is critical, and we often refer to this as an ethics by design approach.
Practical steps include establishing clear policies and requirements for development teams, as well as training and guidance to support that policy. Technical training focused on each stage of the AI lifecycle is important, from the early phases where ideas are formed all the way to when an AI system is decommissioned.
This guidance often includes specifying which tools and methods are recommended for developers and data scientists in the organization.
Could you discuss your approach to bias mitigation in AI systems? How do you balance the need to address harmful biases while ensuring AI systems have an appropriate contextual understanding?
It is important for AI systems to be provided with enough information for contextual relevance, but we must balance this by limiting the inclusion of personal or sensitive information to only what is necessary and appropriate. In some cases, this means we may need to anonymize data sets or create synthetic data to strike this balance.
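To make this concrete, here is a minimal illustrative sketch of the kind of anonymization step Dr Domin alludes to, not drawn from HCLTech’s actual tooling: a direct identifier is pseudonymized with a salted hash, and a quasi-identifier (age) is generalized into coarse bands so records keep contextual relevance without exposing personal detail. The column names are hypothetical.

```python
import hashlib
import pandas as pd

# Illustrative records; the column names are hypothetical.
df = pd.DataFrame({
    "customer_id": ["C001", "C002", "C003"],
    "age": [34, 52, 41],
    "purchase_total": [120.50, 89.99, 240.00],
})

# Pseudonymize the direct identifier with a salted hash so records can
# still be linked internally without exposing the raw ID.
SALT = "replace-with-a-secret-salt"
df["customer_id"] = df["customer_id"].apply(
    lambda v: hashlib.sha256((SALT + v).encode()).hexdigest()[:16]
)

# Generalize a quasi-identifier (age) into bands to reduce
# re-identification risk while keeping enough context for the model.
df["age_band"] = pd.cut(df["age"], bins=[0, 30, 50, 120],
                        labels=["<30", "30-50", "50+"])
df = df.drop(columns=["age"])

print(df)
```

In a real pipeline the salt would be managed as a secret, and the choice of which columns to hash, band, or drop would follow a documented data-classification policy rather than the ad hoc choices shown here.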
What are your thoughts on the growing role of agentic AI in business operations? What safeguards should be implemented to mitigate risks associated with autonomous AI decision-making?
Agentic AI presents tremendous opportunities for businesses to expand and improve their operations, for example through increased productivity when agents work together with humans, and improved experiences through faster response times and personalization.
At the same time, AI agents and agentic AI systems can introduce or increase risks. Some risks include goal misalignment, compounding impacts over time, and complexity in root-cause identification. Agentic AI can also raise broader societal concerns around workforce impact, which need to be addressed through upskilling and training programs.
It is important to have foundational guardrails like those used for all types of AI systems, as well as risk-based guardrails that are more extensive in higher-risk use cases.
How do you view the tension between advancing AI capabilities and the environmental impact of training large models? What strategies would you recommend for developing more sustainable AI systems?
AI can be used to reduce carbon emissions and improve energy efficiency, from homes with smart thermostats and appliances to larger scales in businesses and cities. At the same time, it is important to acknowledge and address concerns around the environmental impact of training large models.
There is a lot of talk in the AI industry now about small language models, which require less computing power to train; that is one way to address this concern. Continued research into new, more efficient training methods is another important strategy.
You’ve suggested that AI regulation shouldn’t follow a one-size-fits-all approach. How might regulatory approaches differ across industries like healthcare, finance, and creative sectors?
AI regulation should be risk-based and appropriate for the use case, rather than one-size-fits-all and technology-focused.
This is because the same type of AI system can have a much different impact in a low-risk use case, such as recommending which pair of pants you might like to buy at a store, than in a high-risk use case, such as recommending which medication is appropriate to treat a disease.
This risk-based perspective also highlights why regulatory approaches might differ across industries. For example, higher risk sectors such as healthcare and finance will need specific regulatory oversight that may not be necessary in creative sectors.
When AI systems make mistakes or cause harm, how should accountability be distributed among developers, deploying organisations, and other stakeholders?
Both developers and deployers should be held accountable for implementing the appropriate guardrails that are under their control. Audit logs and incident-tracking mechanisms can help identify where and when an error occurred and support root-cause analysis.
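As a rough illustration of the audit-logging idea, the sketch below wraps a prediction call so that every request, outcome, and error is recorded with a timestamp and model version. The function and field names are assumptions made for the example, not a description of any specific platform.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited_prediction(model_fn, model_version, inputs):
    """Call a prediction function and record an audit entry for it."""
    entry = {
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
    }
    try:
        output = model_fn(inputs)
        entry["status"] = "ok"
        entry["output_summary"] = str(output)[:200]  # truncate; avoid logging sensitive detail
        return output
    except Exception as exc:
        entry["status"] = "error"
        entry["error"] = repr(exc)
        raise
    finally:
        # Every call, successful or not, leaves a traceable record.
        audit_log.info(json.dumps(entry))

# Example: a stand-in scoring function used only for illustration.
result = audited_prediction(lambda x: {"score": 0.42}, "credit-model-1.3", {"income": 52000})
```

The point is not the specific fields but the discipline: each decision can be traced back to a request, a model version, and an outcome, which is what makes distributing accountability practical after the fact.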
How do you see the landscape of AI responsibility evolving from voluntary commitments to mandatory compliance? What timeline do you envision for this shift?
Voluntary commitments, principles, and frameworks have set the stage for ethical norms globally. We have seen an increase in regulatory activity in recent years, resulting in many new AI-focused laws.
For example, the long-awaited EU AI Act was passed into law in 2024. This trend is likely to continue for the foreseeable future as technology continues to become more widely adopted and draw attention from policymakers around the world.
In which sectors do you anticipate seeing the first robust, sector-specific AI regulations, and how can organisations prepare for these changes?
I believe we’ll continue to see a focus on high-risk use cases of AI, which means sectors like healthcare, financial services, and transportation are likely to see robust, sector-specific regulations first. This is true even without AI, as these sectors tend to be more highly regulated due to the risks inherent in the type of work performed.
The interesting thing is that AI may reduce risk in sectors like healthcare, for example, in applications where AI enables medical care providers to make more accurate or faster decisions on treatment plans.
Looking ahead to 5-10 years, what do you believe will be the most significant changes in how organisations approach responsible AI development and deployment?
As AI capabilities continue to evolve rapidly, our approaches to responsible AI development and deployment must adapt just as quickly to address new risks.
For example, I believe agentic AI will drive significant changes and become widely adopted in the next 5-10 years, and how we approach responsible AI development and deployment will need to reflect this. Agentic AI can enable significant productivity benefits and improve outcomes.
At the same time, it presents risks. For example, an AI agent might adopt a goal that is not aligned with the original goal envisioned by humans. Another risk is related to the compounding impact over time as the AI agent continues to operate autonomously.
It will be important to implement controls during the development phase within the AI agent’s workflow, as well as during deployment, to mitigate these risks, for example by monitoring the actions and API calls the agent makes.
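The following sketch shows one way such monitoring might look in practice, assuming an agent that invokes its tools through a single chokepoint. The tool names, allow-list, and call budget are hypothetical and not taken from the interview; they simply illustrate the foundational and risk-based guardrails Dr Domin describes.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
monitor_log = logging.getLogger("agent_monitor")

ALLOWED_TOOLS = {"search_catalog", "create_ticket"}   # hypothetical tool names
MAX_CALLS_PER_TASK = 10                               # simple runaway-loop guard

class ToolCallMonitor:
    """Logs every tool/API call an agent makes and enforces basic limits."""

    def __init__(self):
        self.calls = 0

    def invoke(self, tool_name, tool_fn, **kwargs):
        self.calls += 1
        # Record what the agent is trying to do before it does it.
        monitor_log.info("%s call #%d: %s(%s)",
                         datetime.now(timezone.utc).isoformat(),
                         self.calls, tool_name, kwargs)
        if tool_name not in ALLOWED_TOOLS:
            raise PermissionError(f"Tool '{tool_name}' is not on the allow-list")
        if self.calls > MAX_CALLS_PER_TASK:
            raise RuntimeError("Call budget exceeded; escalating to a human reviewer")
        return tool_fn(**kwargs)

# Example: the agent asks to create a support ticket via a stand-in tool.
monitor = ToolCallMonitor()
ticket = monitor.invoke("create_ticket", lambda **kw: {"id": 123, **kw},
                        summary="Order delayed")
```

Routing every action through a monitored chokepoint is one practical way to limit goal drift and compounding impacts: unexpected tools are blocked, runaway loops hit a budget, and the log provides the trail needed for root-cause identification.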