
Rajesh Natarajan: Trust Is an Architecture Decision

By: Gemma Versace

February 24, 2026 · 21 min read


Most AI conversations start with capability. Rajesh Natarajan starts with something harder to engineer: trust.


As global chief technology officer of Gorilla Technology Group, Raj has spent decades building AI systems for governments that must work — reliably, transparently, and at national scale. His operating premise is blunt: "Trust is not built through messaging, trust is built through architecture. And if the system in itself doesn't enable that trust, no amount of messaging in this world is going to save us."


In this episode of Keep Moving Forward, Raj joins host Gemma Versace to explore what happens when AI stops being an app and becomes infrastructure — and what engineering leaders need to get right before the stakes get any higher.

 


Trust Is an Architecture Decision, Not a Communications Strategy

AI is no longer abstract. It makes decisions. It influences outcomes. And when it operates in environments that directly affect people's lives, the margin for error changes entirely.


Raj argues that the tension between AI optimists and AI skeptics isn't a messaging problem — it's an architectural one. Much of the early AI wave was built for speed and capability, with accountability as an afterthought. "Trust cannot be added after the fact. It must be engineered into the system from the very beginning." When transparency, data ownership and predictable behavior aren't designed in from the start, no rollout plan or communications strategy fills the gap. "So when people understand how AI works, who controls it, and how it is governed, trust will naturally follow."


That reframe matters for every engineering leader, not just those building government systems. Trust isn't something you earn at the end of a product cycle. It has to be in the foundation.

The Five Decisions That Determine Whether Edge AI Gets Trusted

When deploying AI into a public or critical environment, Raj identifies five decisions that determine whether a system will be trusted long-term — and none of them are optional.

Data minimization comes first. Collect only what is absolutely necessary, because every additional data point increases risk and long-term exposure. "Less data is not a limitation, it's actually design strength." Local processing follows: AI should operate as close to the source as possible, keeping sensitive data off networks. Third is operational visibility: "If we cannot observe this system, there's really little that we can do to trust the system." Fourth is failure-safe design, meaning graceful degradation and human override capability built in before deployment. Fifth is security from day one, including post-quantum cryptography, because the threat landscape won't wait.

None of these work in isolation.

"Trust is not created by a single feature," Raj says. "It is created by a system that behaves predictably."
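Raj's checklist is architectural rather than code-level, but the shape it implies for an edge pipeline can be sketched. The following is an illustrative sketch, not Gorilla's implementation; the function names (`minimize`, `infer_locally`, `classify`) and the field list are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("edge-ai")

# Fields the local model actually needs; everything else is dropped
# before the event ever leaves the capture process (data minimization).
REQUIRED_FIELDS = {"frame_id", "timestamp", "embedding"}

def minimize(event: dict) -> dict:
    """Keep only the fields the local model needs."""
    return {k: v for k, v in event.items() if k in REQUIRED_FIELDS}

def infer_locally(event: dict) -> dict:
    """Stand-in for an on-device model call (local processing)."""
    # A real deployment would run an on-device model here instead
    # of returning a canned result.
    return {"label": "ok", "confidence": 0.97}

def classify(event: dict) -> dict:
    event = minimize(event)
    try:
        result = infer_locally(event)
    except Exception as exc:
        # Failure-safe design: degrade to a safe default and flag the
        # event for human review instead of guessing.
        log.error("inference failed for %s: %s", event.get("frame_id"), exc)
        return {"label": "unknown", "needs_human_review": True}
    # Operational visibility: every decision leaves an audit trail.
    log.info("frame=%s label=%s conf=%.2f",
             event.get("frame_id"), result["label"], result["confidence"])
    # Low-confidence results are routed to a human (override capability).
    result["needs_human_review"] = result["confidence"] < 0.6
    return result
```

The point of the sketch is the ordering: minimization happens before inference, and the audit log and human-review flag are produced on every path, including the failure path.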

Sovereign AI Is Reshaping Where Everything Gets Built

The biggest shift Raj sees isn't a new model or framework. It's a recognition, at the national level, that AI is infrastructure — with everything that implies. "AI is not just software, it is infrastructure. And that realization is changing the mindset and perspective of pretty much everybody that I talk to."


Sovereign AI means governments and organizations are no longer willing to have critical systems depend on infrastructure they don't control. That's reshaping where data centers are built, how models are trained and how systems are secured. 


Raj's advice to CTOs isn't to overbuild for an unknowable future, but to make three foundational decisions correctly: design modular infrastructure that can evolve; make data ownership and control explicit; invest early in security architecture built to last a decade. "The goal over here is not to predict every future change ... The goal over here is to build systems that can adapt safely as the future unfolds."
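One way to read "make data ownership and control explicit" is as a policy object that travels with a system's configuration rather than an assumption buried in code paths. A minimal sketch under that reading; `DataOwnershipPolicy`, `can_export` and their fields are illustrative names, not an existing API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataOwnershipPolicy:
    """Makes ownership and residency explicit and immutable."""
    owner: str                      # legal owner of the data
    residency: str                  # jurisdiction the data must stay in
    allow_cross_border: bool = False
    retention_days: int = 30

@dataclass
class ModuleConfig:
    name: str
    policy: DataOwnershipPolicy

def can_export(cfg: ModuleConfig, destination: str) -> bool:
    """A transfer is allowed inside the owner's jurisdiction; anything
    cross-border must be explicitly permitted in the policy."""
    if destination == cfg.policy.residency:
        return True
    return cfg.policy.allow_cross_border
```

The design choice is that the default is restrictive: unless a policy explicitly opts in, data stays inside its declared boundary, which mirrors the sovereign-AI posture described above.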


Transcript

Rajesh Natarajan:

When people understand how AI works, who controls it, and how it is governed, trust will naturally follow. Trust is not built through messaging, trust is built through architecture. And if the system in itself doesn't enable that trust, no amount of messaging in this world is going to save us.

 

Gemma Versace:

Hey everyone and welcome to Keep Moving Forward, the podcast from X-Team for tech professionals who are passionate about growth, leadership and innovation. I'm your host, Gemma Versace, chief client officer at X-Team. In every episode, we sit down with leaders who are redefining how technology teams work, grow and lead. People who understand that performance begins with connection. AI has moved far beyond experiments and dashboards, and as soon as it moves into daily life, the conversation shifts. The technical challenge is no longer the bottleneck. Trust becomes the real constraint. Today I'm joined by Rajesh Natarajan, global chief technology officer at Gorilla Technology Group.

 

Gorilla builds AI infrastructure for governments around the world. Raj has spent decades designing and scaling mission-critical systems that must operate reliably at national scale. His work sits at the intersection of engineering leadership, long-term systems thinking and innovation under real-world constraint. In this conversation, we explore what happens when AI becomes infrastructure, why governance, transparency and reliability are not blockers to innovation, but enablers of it, and what it really means to build future-proof systems that may need to last decades, not product cycles. Let's get started.

 

Well, welcome Raj. Thank you so much for joining us here today on Keep Moving Forward. I'm very much looking forward to our conversation.

 

Rajesh Natarajan:

Thank you for having me on the show, Gemma.

 

Gemma Versace:

Excellent. Excellent. Well, getting straight into it, it's always great to be able to hear a little bit about the background of our guests. So can you please tell us a little bit about your background and specifically the work you do with Gorilla Technology?

 

Rajesh Natarajan:

Absolutely. So my name is Rajesh Natarajan. I'm the global CTO here at Gorilla Technology. Personally, I've spent a little more than three decades building large-scale technology systems, mostly starting with consumer platforms from my days at Microsoft via products like Zune, Windows Phone, and then moving on to other enterprise applications like Dynamics. And currently I'm focused on AI infrastructure, which is catering to national and enterprise scale. What I do today is primarily centered around building trusted AI foundations, and these typically tend to include GPU-accelerated data centers, edge AI platforms and secure infrastructure that allows governments and organizations to deploy AI while maintaining full sovereignty over their data, their models and their operations.

What really gets me going in the morning is that I believe that we are at an inflection point wherein AI is no longer just software, it's infrastructure that will shape economies, security and daily life. And hence my passion comes from building these systems that last and ensuring that the technology we create today becomes the foundation people can trust tomorrow.

 

Gemma Versace:

Oh, fantastic. And what an absolutely wide-ranging role that you have across so many different industries. Such an interesting remit, and you can definitely hear the passion coming through in your voice around really wanting to be able to help implement the change that, as you said, is coming as part of the AI wave that we're all on. It's a good segue into our next question and one of the questions we wanted to ask is why is there so much tension between folks who seem to be high on the AI innovation wave and a crowd of people who are more skeptical or wary, distrustful of AI? And what do you see as the key to getting it right? I know it's a loaded question, but what are your views as to how to get it right?

 

Rajesh Natarajan:

Yeah, Gemma, I think at least in today's time, it's a very valid question. And in my opinion, this tension, it exists because AI is no longer abstract. It's making decisions, it's influencing outcomes and operating in environments that directly affect people's lives. And when technology moves faster than trust, skepticism is a natural and healthy response. The challenge is that much of the early AI wave was built around convenience and capability, but not on accountability. Getting it right in this particular model requires a shift in the mindset. Trust cannot be added after the fact. It must be engineered into the system from the very beginning.

And that means that transparency in how the system operates is understood, clear ownership of data and who owns that particular data, what they can do with that data is understood, and more importantly, predictable behavior of those AI systems and having mechanisms to monitor and control AI in real time becomes extremely important. So when people understand how AI works, who controls it, and how it is governed, trust will naturally follow. What is really important for me to internalize and others is that trust is not built through messaging. Trust is built through architecture. And if the system in itself doesn't enable that trust, no amount of messaging in this world is going to save us.

 

Gemma Versace:

Yeah, it's a really good call out that it is so inherent in also how people respond and buy into certain new systems and technology as well, that trust is such a critical point. Tech leaders are often finding themselves in a position of bringing in skilled talent, hired guns if you will, to augment their own employees and workforces. What are some of the most important considerations beyond the hard skills that you have to consider in these types of scenarios?

 

Rajesh Natarajan:

One of the things that I think is important in these kinds of scenarios is cultural alignment. And I believe that this alignment is foundational. And let me try to kind of explain that, because it's not about how I work being my definition of culture. It is how we actually create systems that I believe is the new definition of culture. The intricacy today is that AI systems tend to reflect the priorities and values of the teams that build them, right? It's not intentional, but it is inevitable. The decisions engineers make about what data to collect, what trade-offs to accept, and even how the systems will behave in edge cases, all of these things shape real-world outcomes today. And if a team optimizes only for speed, you get fragile systems. If a team optimizes only for capability, you get systems that may lack accountability.

So what matters is building teams that understand the weight of what they are creating, and that requires a culture grounded in responsibility and not just in innovation. For example, at Gorilla we emphasize the importance of coordination, right? To me, engineers are not just writing code, they're building systems that operate in public environments, more specifically national infrastructures in places where reliability and trust are non-negotiable. So in my humble opinion, culture determines whether technology is deployed responsibly or not, because technology alone cannot do that.

 

Gemma Versace:

How critical is it to have a team that is aligned from a values perspective and to have a cohesive culture within engineering and developing teams when working on AI that also intersects with people's lives as well?

 

Rajesh Natarajan:

The reality of life is that at times we spend more time at work with our work colleagues than we do at home. That's just reality.

 

Gemma Versace:

It's crazy, yeah.

 

Rajesh Natarajan:

And as much ... It is crazy, right? I mean, I've had my share of, I still have my share of 18 hour days, but it's all working towards a specific goal. But I think this is an important point of inflection because us understanding and appreciating that the person who's sitting across from us in the office is also just like us, right? And a little bit of EQ goes a long way in controlling some of these semantics. So that's just something which is very fundamental. And at Gorilla, we tend to focus a lot on making sure that the soft skill trainings also happen to our employees so that they can cross some of these personal divides as well.

Now that being said, if you just came down to the fact that let's assume that life has only work for a second, and if you think in terms of engineering discipline and engineering values, there are two things which are extremely important to me, attitude and aptitude. And if I can hire the right people with the right attitude and the right aptitude, they will pick up skills, they will learn. Have you ever wondered that four years ago or maybe five years ago, AI wasn't completely mainstream? Right? And if you think about how in the last four years there are so many experts in AI out there, it's mind blowing, right? And it's not a field that you can actually learn that fast. But this is also a very strong indication of the fact that people with the right attitude and aptitude can apply themselves and learn.

So that's always the first starting point for us at Gorilla. When we hire people, we tend to hire people with the right attitude and aptitude. Even if the skills are not a hundred percent aligned, the first two segments will actually get us through. So that's point number one. Point number two is always making sure that the team is structured and organized towards a specific goal in a specific direction. And what that really means is that building mind share, so that everybody buys in to the overall domain that we are going after.

And because once people start buying into that particular domain, the conflicting point of views which people don't like, which I love, actually come to the table because that way it's possible for us to kind of normalize what the structure needs to be, what the idea needs to be, and make sure that we are actually going off to the right dimensions that we need to get to. So the cultural artifacts for me are, while it is important for me to take into consideration language barriers and cultural competencies of each country that we work in, I still do believe that if we can actually rally people around a common goal and ensure that their voices are heard, the kind of technology and the products that we develop and deliver will be second to none. And I think from that perspective, I am blessed and the tribe at Gorilla is blessed as well because I think we do one heck of a job.

 

Gemma Versace:

That's fantastic, and thanks so much for sharing that. And other CTOs listening to the podcast today, I think the two things that you just said there that make the best sense and make the most sense, I should say, and obviously have contributed to the wonderful success that not only you as a leader but also Gorilla Technology has had is the commitment to making sure that there is the clarity of where we're going and why we're doing it, and that then dovetails into getting more of that buy-in and excitement from the teams rather than just having people pursue their work but not really feel connected or aligned or confidently articulate what the end goal and target looks like. So I think that's so incredibly important to invest in that time with your teams to do that, and it seems like you've done a fabulous job at it.

 

Rajesh Natarajan:

We try every day.

 

Gemma Versace:

Yeah. Excellent, excellent. The work that you're doing now at Gorilla involves, you mentioned it earlier I think in the first response that you talked about, involves governmental use of AI and data storage. How much different is the issue of data storage, transparency and auditability in technology with government entities versus the private sector? I can only assume that it is quite different.

 

Rajesh Natarajan:

Here's the deal, I'm a little bit of an idealist, okay? If it's good for the public sector, it has to be good for the private sector. So to me, the bar needs to be pretty high. But the way I would like to think about this is we have to build trust by design, and that's very, very, very important. The kind of pressures the private sector has are very different from the kind of pressures that the public sector has. In this particular connotation, what is really important is how we as an organization build trust, right? So it's a different kind of a trust relationship that actually needs to be established over there so that we can make sure that we are not only walking the talk, but we're also walking by their side as we go from one phase to another phase.

In the private sector, on the other hand, the trust relationship and enforcement, I think it's done pretty much through standards today, right? So if you are to pick up a new piece of technology or software, people go think about it and say, "Hey, what does Gartner say? What does Forrester say?" And that becomes kind of a benchmark for you to go get into and life is great from that particular perspective. But at the end of the day, I think once again, it is operational trust and relationship-based trust that needs to be established in order for us to be able to scale the kind of relationships that we foster over there, which is very different from governmental relationships. So for most organizations, striking a balance in between the two is tough because the personalities are very different and the power dynamics are extremely different, but I think that's one of the things that actually makes this end of the business a lot more exciting.

 

Gemma Versace:

I was just about to say thank you so much for such interesting and thoughtful insight into the difference of public versus private. And I love how you also described that it's about you and your teams adjusting to what is needed rather than trying to adjust that of the customer. I think that that is really important as well. And I think the other piece that just is weaving itself so beautifully through the whole entire discussion here is the trust piece, that it absolutely is the foundation and it is just so critical to being able to have any type of success from either a relationship standpoint or a delivering on your promises standpoint as well.

You mentioned trust by design, and obviously in the theme of what we've just spoken about, can you give us your trust by design checklist for edge AI? If a team is deploying AI into a public or critical environment, what are three to five decisions they must get right really early on, especially around things like data minimization, local processing and operational monitoring?

 

Rajesh Natarajan:

There are five decisions that I believe will determine whether an edge AI deployment will be trusted and sustainable. The first one is key, which is data minimization. Collect only the data that is absolutely necessary. Every additional piece of information that you want to collect because you think it's cool and you could do X, Y, Z in the future increases risk, complexity and long-term exposure. So it's important to internalize that less data is not a limitation, it's actually design strength. The second point is local processing. AI should operate as close to the source as possible. It reduces latency. It improves resiliency as well. And it also ensures that sensitive data does not move across thresholds or networks. So edge processing is foundational to trust. So that's the second point.

And the third one is operational visibility. And what I mean by that is that you must be able to see what the AI is doing in real time, and that means monitoring, auditability, traceability. So if we cannot observe this system, there's really little that we can do to trust the system. The fourth one is failure-safe design. Every system, especially every AI system, will fail at some time, right? And what really matters when it fails is how does it fail? Does it fail safely? Is it predictable and are there human overrides available, right? Graceful degradation is very, very, very important. And without that, life gets a little scary.

Imagine you're in a driverless car, there's no steering wheel in the car, there's no gas pedal or brakes, and you're just sitting there and the car decides to crash into a wall or wants to go in that particular direction. What are you going to do? Can you control it? It's a bad example, a terrible example, but I think it gets the point across because that's the nature of fear that we live in. So imagine, if that's how we have designed the system, the fear that our users actually live in.

And the final one, the fifth one, is security, security, security from day one, right? And we're not talking about just traditional security, I'm also saying let's be forward-looking. We live in an era where quantum computing is just right around the corner, so are we post-quantum ready, right? So the notion is how do we get there? So including quantum-safe cryptography and strong identity between components is going to be important for us. And the net point over here is that trust is not created by a single feature, right? We talked about five different things over here, but just doing one of them doesn't create trust. It is created by a system that behaves predictably, right?

It has to be transparent and it needs to be secure under all or most conditions. I don't use secure and all in the same sentence because that's an idealistic world. Something might always go wrong, but you just need to keep the perspective open in the sense that have I taken the precautions that I'm aware of in order for me to keep the system safe? And I think that's a very important question for us to ask ourselves.

 

Gemma Versace:

I thought the example of the driverless car without the steering wheel and no brakes, it would be a terrible outcome, but I thought it was a very good example though to give because there is this level of fear when you can't see or you don't feel like you fully understand exactly how things like AI platforms and tools are actually getting to certain decisions or outcomes. So the fact that you talk about the auditability and traceability and you consistently need to be able to be testing and verifying it, obviously your clients at Gorilla would be very safe in the knowledge with that level of rigor from your side, but also that transparency in saying what it is and how your team go about making sure that they can deliver that comfort for clients as well.

And thank you so much for being so detailed in your response there too, I think listeners listening to that, there were some really fantastic points, particularly the one that really resonated with me about the data minimization in that if you keep looking for more or you think that there's going to be more information out there that you want to include, it actually could erode some really fantastic foundational work initially as well. Looking ahead, what trends are most likely, in your opinion, to reshape trust in AI, sovereignty expectations, regulation, post-quantum security, shifting public tolerance even? What should CTOs start doing this year to try to stay ahead without overbuilding?


Rajesh Natarajan:

At least from my perspective, when I think about the future, I think sovereign AI is the biggest shift that is happening right now. And I mean this because countries and organizations are basically realizing that AI is not just software, it is infrastructure. And that realization is changing the mindset and perspective of pretty much everybody that I talk to. Why? Because this infrastructure, it requires compute, it requires power, it requires cooling, it requires data, it requires control. And increasingly, there is a recognition that these elements must exist within trusted boundaries. If you have AI talking to you about what is good and bad, they're relative terms. What's good in one country is not the same in another country. So what happens over here is that cross-boundary data tend to mislead, misguide certain cultures, certain propositions.

Now, I've heard debates on both ends of the spectrum. People say, "Hey, why do you think some sort of archaic customs should still be followed? It's not modern." And there is no guidance around how certain modern customs should be embraced or attracted by civilizations which want to or wish to remain in the past. And that's a balancing act. The problem is that folks like you and I are not equipped to make a choice over there. Countries and nations want to make a choice. It's a different question if you ask me if they're making the right choice or not. But we are not here to judge, but we are here to facilitate, and that's the most important thing. And what this is doing is it's basically it's reshaping everything. It's reshaping where data centers are built, how models are created, how systems are secured.


We are also seeing that some security expectations are evolving rapidly, especially we talked about this transition towards post-quantum cryptography. This is a big thing that's happening right now. So you said it right. CTOs do not need to overbuild. Absolutely not, not in this day and age, but they do need to make foundational decisions correctly. And in my humble opinion, three things matter most. First, build a modular infrastructure that can evolve. Second is to design systems where data ownership and control are explicit. Third is to invest early in security architecture that will remain resilient at least for the next decade.


The goal over here is not to predict every future change. If we could do that, that would be great. I would love to be Nostradamus, but I don't think any of the CTOs are. But the goal over here is to build systems that can adapt safely as the future unfolds. That is where the real secret in essence is. And this is where me, as a CTO, I love to dream and I really hope that the other CTOs listening on this particular conversation also choose to dream along those lines.


Gemma Versace:

Yeah, fantastic. I think that is such wonderful advice. And for the CTOs that are listening that aren't dreamers, they probably will be by the end of this because the way that you are able to really articulate and share how you have been able to develop the culture but also the outcomes at Gorilla, they should be absolutely looking at your playbook, Raj, and being able to implement it within their own teams and businesses. Now, last question that we do ask all guests on the Keep Moving Forward podcast, what keeps you moving forward every day? You mentioned what gets you out of bed in the morning, but what are the things that you specifically do and what advice do you have for listeners around how you keep motivated to keep moving forward?

 

Rajesh Natarajan:

It helps that I'm a cup-is-half-full kind of guy.

 

Gemma Versace:

Great.

 

Rajesh Natarajan:

So I am not quite sure who to thank for that, but that certainly helped me a lot. But what really motivates me and what really gets me going is to build things that last. The issue with technology today is that it moves so fast, just so fast. I mean, I read something today, tomorrow it's obsolete. Go figure, right? But the underlying philosophy is that infrastructure shapes the future for decades. So whilst technology basically evolves on a daily basis, infrastructure doesn't have that particular liberty. The opportunity that we have today is to build AI systems that are not only powerful, but they're trusted, we spoke about this earlier, and the fact that once that trust is earned, it becomes the foundation on which everything else is built.

And that responsibility that I carry on my shoulders when I talk to my current existing customers or prospective customers, be they in the public sector or the private sector, it makes my work a lot more meaningful because what I'm actually doing and what we at Gorilla are doing with the help of the entire tribe is we are trying to establish that infrastructure that's going to lay the foundation for the growth of enterprises or nations for the next 10, 20 years.

Gemma Versace:

Amazing. Thank you so much, Raj. I have absolutely thoroughly enjoyed this conversation and thanks for joining us here today.

Rajesh Natarajan:

No, absolutely. It was my pleasure.

Gemma Versace:

When AI is confined to an app or an internal tool, the margin for error feels different. But once it powers a city or safeguards critical infrastructure, the stakes change. Reliability becomes a leadership discipline. Trust becomes a product requirement. What stayed with me from this conversation is the idea that acceptance cannot be an afterthought. You cannot bolt trust onto a system once it is deployed. It has to be defined early. These are not compliance details. These are core design decisions. There is a powerful reminder here about execution under constraint. The teams that thrive are not the ones chasing speed at any cost. They're the ones who build resilient systems, clear governance and operational discipline into the foundation.

Trustworthy AI is not just about better models. It is about leadership that treats long-term reliability, transparency and sovereignty as part of innovation itself. Join us next time for more conversations with technology leaders who inspire us to grow, lead, and innovate. You can find us on Apple Podcasts, Spotify, or YouTube Music. If you enjoyed this episode, please share it with your network. We'll see you next time.
