AI and ethics can sometimes feel like a forgotten subject. With all the hype around what AI can do, most of the focus is on the opportunities. But alongside the excitement, there are real questions about how AI is built and used, and how to minimise the risks.
To be fair, several of the Microsoft AI learning paths and certifications I’ve taken recently did spend a lot of time on ethics. That reinforced for me that this isn’t just an academic add-on. It’s essential if AI is to be accepted by both customers and staff, and to avoid the reputational risks that come from misuse or lack of transparency.
Oddly, I feel like I’ve been here before. The experience I relate it to is my time as Head of Marketing for the 2011 Census in England & Wales.
A national census is built entirely on trust, transparency, and communication. For it to succeed, people have to believe the purpose is real. In our case, that meant showing that Census data, and each household's response, was needed to plan the public services we all rely on: education, healthcare, transport, and more. But trust is fragile. If people don't believe their personal data will be handled ethically, they won't take part.
The parallels with AI are striking. In both cases, privacy, data protection, and transparency are the cornerstones. For the Census, we had to explain why we were collecting over 40 personal questions (which could feel intrusive), what the benefits were (better services), and exactly how data would be kept safe — not just through systems, but through people and processes. The more transparent we were, the less room there was for mistrust.
The biggest lesson I took from that time was simple: over-communicate, and then communicate again. Only by repeating and reinforcing the “why” and the “how” could we convince the public that their data would be safe and used responsibly.
I believe the same principle applies to AI today. If customers, employees, or the public don’t trust us to handle AI responsibly, adoption will stall. That’s why ethics isn’t a side issue — it’s a leadership issue.
The Three Ethical Pillars
So what should leaders actually focus on when it comes to AI and ethics? From both my past experience and what I’ve learned through recent AI training, I believe there are three non-negotiables: transparency, fairness, and accountability.
Transparency
This is about being open with customers, staff, and stakeholders about how and where AI is being used. Just as the 2011 Census required clear explanations about why personal data was being collected, companies today need to explain why AI is applied in certain processes, what the benefits are, and how risks are managed. Transparency removes space for doubt.
Fairness
AI systems are only as good as the data they are trained on. Bias can creep in unnoticed, excluding certain groups, misrepresenting realities, or amplifying inequalities. That makes bias a reputational risk you cannot ignore. Customers and regulators will hold your business responsible, which is why ensuring fairness is not just an ethical question; it's business-critical.
Accountability
The final pillar is accountability. With ethical AI, leaders must establish clear human oversight. AI is not perfect: it needs to be trained, monitored, and reviewed on an ongoing basis. Trust comes from knowing that a responsible human is accountable for decisions made with AI. Just as we had to stand up and answer tough questions about how 2011 Census data was handled, leaders today must be prepared to explain, defend, and correct the way AI is used in their organisations.
Together, these three pillars build the foundation for trust. Without them, AI adoption might still happen — but it will be fragile, and any misstep could erode confidence overnight.
Obstacles for SMEs
Talking about ethics in AI can sometimes feel abstract, but for SMEs the challenges are very real — and in my opinion, often harder to manage than for big companies and organisations. From my perspective, three obstacles stand out:
1. Limited resources
Most SMEs don't have dedicated compliance or ethics teams. In large organisations, legal, risk, and governance functions can absorb some of this responsibility. In a mid-sized firm, those roles either don't exist or are spread across people who are already stretched and may not be experts in this field. That makes it easy for ethics to slip down the priority list.
2. Vendor reliance
Many SMEs adopt AI through third-party tools rather than building models in-house. That creates a dependency: you’re trusting vendors to get it right. But as leaders, we cannot outsource accountability. Customers and regulators will look to your organisation when something goes wrong, not your software supplier. That’s why vendor due diligence and ongoing oversight matter so much.
3. Perception that “ethics is for big corporations”
There's a common assumption that ethics and responsible AI are issues reserved for tech giants or highly regulated industries. In reality, SMEs are just as exposed to reputational risk. In fact, smaller firms may be more vulnerable, because one misstep can have a disproportionate impact on customer trust and brand credibility.
I currently work in an SME, so these aren't just theoretical points. They're the obstacles I see in my working life, and I know other SMEs face the same pressures. Compared to my Census days, when we had huge governance structures to lean on, SMEs today have to earn trust with far leaner resources. But the principle remains the same: be clear, be open, and be accountable.
The truth is that for SMEs, ethical AI isn’t a nice-to-have. It’s about safeguarding your licence to operate. If customers lose confidence, they won’t wait for regulators — they’ll simply stop doing business with you.
Practical Steps Leaders Can Take
The good news is that ethical AI doesn’t have to be overwhelming. Even in SMEs with limited resources, there are practical steps leaders can take to make a real difference. From my own learning and experience, five stand out:
1. Be transparent and over-communicate
If I learned one thing from the 2011 Census, it’s that people need to know not just what you’re doing with their data, but why you’re doing it and how it will be kept safe. The same principle applies to AI. Communicate openly and repeat the message often. Over-communication builds trust and reduces room for doubt.
2. Publish an AI policy
This doesn’t need to be a 50-page document. Even a short, accessible policy that sets out how your organisation uses AI — what it does, what it doesn’t do, and who is responsible — can have a big impact. It gives clarity internally for staff, and externally it signals to customers that you’re taking AI use seriously and responsibly.
3. Keep a human in the loop
AI is great, but it doesn't always get it right. In my own industry, fire safety, "getting it right" in product support or training isn't a nice-to-have; it's an absolute must. That's why human oversight is critical. When risk is involved, a responsible person must check, validate, and ultimately stand behind the decision.
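For readers with a technical team, here is a minimal sketch of what "human in the loop" can look like in practice. Everything in it is illustrative: the confidence threshold, the AIAnswer structure, and the logging and escalation functions are hypothetical stand-ins under assumed names, not references to any specific product or framework.

```python
from dataclasses import dataclass

# Illustrative sketch only: names and thresholds are hypothetical.
CONFIDENCE_THRESHOLD = 0.90  # below this, a human must review

@dataclass
class AIAnswer:
    question: str
    answer: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def log_decision(result: AIAnswer, reviewed_by_human: bool) -> None:
    # In a real system this would write to an audit trail
    print(f"LOG: {result.question!r} (human review: {reviewed_by_human})")

def escalate_to_human(result: AIAnswer) -> str:
    # Placeholder: in practice, push to a review queue and wait
    # for a named, responsible person to sign off.
    print(f"ESCALATE: {result.question!r} needs human sign-off")
    return "Pending human review"

def route_answer(result: AIAnswer) -> str:
    """Decide whether an AI-generated answer can go out directly
    or must be escalated to a responsible human reviewer."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: sent on, but still logged so a human
        # remains accountable for what the system did.
        log_decision(result, reviewed_by_human=False)
        return result.answer
    # Low confidence: a person checks, validates, and stands
    # behind the final answer before it reaches the customer.
    return escalate_to_human(result)

if __name__ == "__main__":
    query = AIAnswer(
        question="Which extinguisher for an electrical fire?",
        answer="Use a CO2 extinguisher.",
        confidence=0.62,
    )
    print(route_answer(query))
```

The exact mechanism matters far less than the design principle: the system defaults to human judgment whenever it is unsure, and every decision leaves an audit trail that a named person can answer for.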
4. Train and communicate with your teams
Ethics can’t just sit with leadership or compliance — it has to be part of how every employee approaches AI. Training doesn’t need to be complex. What matters is communicating the basics clearly: what AI can do, where bias can creep in, what the limits are, and when to escalate to human review. This gives staff both the confidence to use AI and the awareness to use it responsibly. Don’t forget, some will be more anxious about AI than others, and this is one more reason why training and team communication are so important.
5. Tie ethics to business outcomes
Finally, remember that responsible AI isn’t just about avoiding risk. It underpins reputation, customer retention, and employee trust. In a regulated industry like fire safety, credibility is everything. If customers lose trust in the way AI is used, they won’t wait for regulators — they’ll simply stop listening to you.
None of these steps require a huge budget. What they do require is leadership commitment, clear oversight, and communication at every level.
Conclusion
Looking back, the 2011 Census taught me that trust, transparency, and over-communication were the only way to succeed. Without them, people would never have shared their personal data. The same lesson applies to AI today: if customers or employees don’t trust it, adoption will stall.
What makes this different in 2025 is the speed and scale of AI adoption. The opportunities are enormous — but so are the risks if we treat ethics as an afterthought. That’s why leaders, especially in SMEs, need to put transparency, fairness, and accountability at the centre. It’s not just a compliance issue. It’s about protecting reputation, building customer confidence, and enabling adoption at scale.
AI is not perfect. It will need oversight, training, and human judgment to make sure it works in the right way. But with clear communication, simple policies, and a culture of responsibility, SMEs can show that ethical AI isn’t just possible — it can be a real differentiator.
For me, ethics is not a brake on AI innovation. It’s the accelerator. The more people trust AI, the faster and further it can go.