
In aviation, AI is only valuable when the objective comes before the technology. Knowing where it belongs matters as much as knowing what it can do. 79% of airlines rank generative AI and large language models as a top investment priority, making these technologies the number one focus area in airline IT strategies, according to The State of Aviation: 2025 Air Transport IT Insights.

As CTO of SITA for Aircraft, Alexandre Decombaz has spent years working with AI and flight operations, not as a strategic label but as a capability that has to earn its place in an environment where every decision has real consequences. We met him in Geneva to understand what that means in practice.

“Old structures and new systems,” Decombaz says. “That’s aviation in a nutshell.”

 

Start with the business need, not the technology

When you look at how airlines talk about AI, what’s the biggest misconception you see?

Many still treat AI itself as the transformation, when in reality AI is only valuable if it solves a real need. Some use cases, like modern chatbots, show how transformative AI can be. A few years ago, if you phrased a question incorrectly, the bot failed. Today, you can be imprecise and still get relevant answers.

The mistake is trying to put AI everywhere simply because it’s fashionable. The sequence should always be simple: start with the business need, ask whether AI genuinely helps, then design the solution.

The sequence matters more than the technology. AI is an answer, not the objective. The objective is yours to define.

 

Some CIOs say they still don’t see enough meaningful AI use cases from vendors. Why does that perception exist?

In aviation, AI as a label is not good enough. Sometimes it’s just window dressing, like the “AI” button on a TV remote that never gets used. AI must solve real operational problems: fuel optimization, disruption management, decision support. That gap is real, and I understand the frustration.

It’s not about releasing more AI features. It’s about releasing the right AI in places where uncertainty is acceptable and value is clear.

That said, vendors also have a responsibility. We can’t wait for airlines to define perfect use cases. We need to understand their pain points deeply and say, “This is where AI makes sense for you, and here’s why.”

 

An answer and an objective are not the same thing

You've said AI is an answer, not the objective. Why do you make that distinction?

Because I’ve seen the difference from both sides.

When you embed AI into products like SITA OptiFlight, you enter a completely different development reality. You might train a model for months before discovering it doesn’t behave as expected, and then you start again. That’s very different from traditional software, where progress is visible every few weeks.

At the same time, we also use AI internally, for example to help developers with testing. That’s a very different use case. AI improves how we work, but it’s not part of the operational decision chain. So there’s a critical difference between using AI for productivity or development and embedding it into systems used by pilots, dispatchers, or operations teams. Those scenarios don’t carry the same risks, and they don’t require the same mindset.

 

When you do embed AI in flight operations, where do you draw the line between what’s safe and what isn’t?

The line is safety‑critical determinism. This means that if you input A and B, you always get C. That’s what you want in many operational contexts. With AI, the same inputs might produce different, statistically plausible answers on different days.

In SITA OptiFlight, that’s acceptable. We evaluate hundreds of thousands of possible trajectories, many of which are equally optimal in fuel terms. AI helps there, and we wrap it in strict flight‑envelope rules, so every recommendation is safe to fly.

But displaying fuel onboard is a different story. That data comes directly from the aircraft. We don’t need AI to guess it. If AI tried to predict fuel between messages, it could be wrong due to turbulence, speed changes, or pilot decisions. Showing an incorrect number could mislead dispatchers and introduce risk. In those cases, AI adds uncertainty, not value.

That’s why safety‑critical information is always protected by hard rules outside the model. If a recommendation falls outside those boundaries, it’s blocked. That’s non‑negotiable.
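That guardrail pattern, deterministic rules applied outside the model that block any out-of-bounds recommendation, can be sketched in a few lines. This is a minimal illustration only, not SITA’s implementation; the class names, fields, and limits are hypothetical stand-ins for real flight-envelope data.

```python
from dataclasses import dataclass

# Hypothetical flight-envelope limits. In practice these would come from
# aircraft performance data, never from the model itself.
@dataclass(frozen=True)
class Envelope:
    min_speed_kts: float
    max_speed_kts: float
    max_altitude_ft: float

# A hypothetical model output: one candidate trajectory point.
@dataclass(frozen=True)
class Recommendation:
    speed_kts: float
    altitude_ft: float

def within_envelope(rec: Recommendation, env: Envelope) -> bool:
    # Deterministic check: the same inputs always yield the same verdict,
    # regardless of what the model produced that day.
    return (env.min_speed_kts <= rec.speed_kts <= env.max_speed_kts
            and rec.altitude_ft <= env.max_altitude_ft)

def filter_recommendations(recs: list[Recommendation],
                           env: Envelope) -> list[Recommendation]:
    # Anything outside the hard boundaries is blocked before it can
    # reach a pilot or dispatcher.
    return [r for r in recs if within_envelope(r, env)]
```

The key design point is that the filter sits after the model: the AI can propose any number of statistically plausible options, but only those inside the fixed envelope survive.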

 

Where does AI make a real difference today? What are you focusing on now?

One of the strongest areas is the Operations Control Centre. OCC teams make hundreds of decisions every day, often under pressure, with incomplete information and competing constraints. There’s rarely one correct answer. It’s about optimization, not absolutes. That’s where non‑deterministic AI becomes an advantage.

When disruptions occur, AI can analyze years of similar events, highlight patterns, and propose options based on past outcomes. It doesn’t replace human judgement. It gives teams a head start when they need it most.

And just like in SITA OptiFlight, everything stays within defined boundaries. The airline sets the rules, thresholds, and trade‑offs. AI proposes; humans decide.

That’s where AI is at its best: helping teams make better‑informed decisions, faster, without compromising safety.

 

AI, cybersecurity, and cloud are one transformation, not three

AI is the top priority for airlines, while cybersecurity remains the top challenge. Does that align with what you hear from airline leaders?

Absolutely, and the reason is behavioral, not just technical. Employees already use AI tools at home, so they naturally bring those habits into the workplace: drafting content, comparing documents, prompting models, often without realizing that data may be stored or reused. That behavior alone opens entirely new cybersecurity fronts. It’s a direct consequence of AI adoption that most organizations aren’t fully accounting for.

That’s why cybersecurity being the top challenge makes perfect sense. What still surprises me is that it’s not always treated as a strategic priority in its own right.

Cybersecurity can’t evolve separately from AI anymore. If it doesn’t grow at the same pace, AI adoption becomes fragile from day one. That’s why, for me, AI, cybersecurity, and even cloud are no longer separate projects. They are one transformation. If your AI strategy doesn't include cybersecurity, it isn't finished.

 

Start with your people and your purpose

What is the one question every airline should be asking about AI right now?

The real question isn’t whether to adopt AI. It’s how to do it safely, meaningfully, and in ways that genuinely help people do their jobs. Don’t start with the technology. Start with your people and your purpose.

AI will change how teams work whether you plan for it or not. That means educating teams, giving them the right tools, and being intentional about where AI is used.

Aviation has always balanced very old systems with very new technology. AI is simply the next chapter. If we treat it with the same discipline and pragmatism as every other critical system, it will strengthen operations, not complicate them.

 

Curious to explore more of the 2025 Air Transport IT Insights?

Check the full report
