By: Jeannie Bonilla, LATAM Business Lead at TeKnowledge
28th May 2025, San Salvador, El Salvador
This week, I had the opportunity to speak at the first AI Mastery Series Workshop in Central America; El Salvador was the first location where TeKnowledge hosted the workshop. I’ve delivered many sessions over the years, but this one stood out. We didn’t just talk about how AI works; we explored what it means to use it responsibly.
In a room filled with senior executives from both the private and public sectors—people facing AI-related challenges daily—we tackled one of the most important and often overlooked questions in tech: How do we ensure AI serves people fairly, safely, and transparently?
The Questions That Matter
AI is moving fast. In El Salvador and across Latin America, it’s being used in everything from public services and healthcare to financial systems. This growth brings huge potential—but also serious responsibility. That’s why one of the questions we circled around for most of the session was:
If AI is making decisions that affect people’s lives, who takes responsibility when something goes wrong?
And equally important: What frameworks or principles should guide us when designing and using these systems?
To answer that, we focused on four key areas of ethical AI that reflect the same values we expect in human decision-making. The difference now is that we have to embed them into systems, processes, and data.
- Fairness
Human comparison: Avoiding discrimination in hiring or services.
When a person hires someone or makes a service decision, they (ideally) focus on merit—not race, gender, or background. But people have unconscious biases, and when AI is trained on data reflecting those biases, it can amplify them at scale.
For example, if a company has mostly hired men in the past, an AI trained on that history might begin to prioritize male candidates without ever being explicitly told to.
Just like we ask people to challenge their assumptions, we must audit and test AI systems for bias. Fairness isn’t automatic; it must be designed and maintained.
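For readers who want to see what "auditing for bias" can look like in practice, here is a minimal, illustrative sketch of one common check: comparing selection rates across groups (sometimes called a demographic parity or "four-fifths rule" check). The data and function names are hypothetical, not from any real hiring system.

```python
# Minimal sketch of a selection-rate audit (demographic parity check).
# All data below is illustrative, invented for this example.

def selection_rates(decisions):
    """Compute the hire rate per group from (group, hired) pairs."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}

# Hypothetical audit log of past decisions: (group, was_hired)
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(log)
ratio = min(rates.values()) / max(rates.values())

print(rates)   # hire rate per group: {'A': 0.75, 'B': 0.25}
print(ratio)   # 0.333... -- below 0.8, so this disparity would be flagged
```

A real audit would be far more rigorous, covering multiple fairness metrics, confidence intervals, and intersectional groups, but the core discipline is the same: measure outcomes by group, set a threshold, and investigate when the numbers diverge.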
- Transparency
Human comparison: Being able to explain a decision.
If a loan application is denied by a human, you can usually ask why and get an explanation. But with AI, decisions can come from a black box, using complex algorithms that no one can easily explain.
An AI might deny a person access to a medical procedure or financial product, and neither the user nor the provider can clearly explain the reason.
People deserve to understand AI decisions. Transparency means making models and processes interpretable—not just for developers, but for those affected as well.
- Privacy
Human comparison: Respecting confidentiality.
We expect doctors or public officials to protect our personal information, but AI systems often rely on large-scale data collection, and privacy can become an afterthought.
For example, an agentic AI might collect voice recordings beyond what’s needed, or personal data may be used to train models without explicit consent.
Privacy must be intentional. Ethical AI requires data minimization, informed consent, and strong safeguards, especially in regions where laws are still catching up.
- Accountability
Human comparison: Owning the outcome of a decision.
When a human makes a poor decision, they can be held accountable. But who is to blame when AI makes a mistake? Is it the developer? The manager? The company?
A predictive system might wrongly flag someone as high-risk for a loan or a crime, and blaming “the system” isn’t good enough.
We must define clear ownership across every stage of the AI lifecycle. Someone—a human—must always be answerable. Responsibility doesn’t disappear with automation; it just shifts.
Ultimately Human
El Salvador has a unique opportunity right now. As the country builds out its digital capabilities, it can choose not only to adopt the latest tools, but also to lead with integrity and care. The leaders in the room understood this. They brought real curiosity to the table, asking how to protect people’s rights, earn trust, and create solutions that are safe and inclusive from the very beginning. They valued progress that is shared, trusted, and sustainable.
If you’re working with AI—whether you build it, manage it, or make decisions about where it’s used—ethics is your responsibility too. We can’t afford to leave it to chance or to someone else.
The systems we create today will shape people’s lives tomorrow.
About the AI Mastery Series
The AI Mastery Series is a month-long initiative by TeKnowledge to bring meaningful conversations about AI to key leaders across Latin America. It began in El Salvador and will continue in Costa Rica and Mexico, creating space for reflection, learning, and leadership around responsible innovation.
Follow the journey via #AIMasterySeries, stay up to date by following TeKnowledge, and reach out if you’d like to take part in a future session.