Protecting one of the world’s leading travel, software and services businesses against the accelerating threats of AI illustrates why CISOs must stay steps ahead of the latest adversarial AI tradecraft and attack strategies.
As a leading global B2B travel platform, American Express Global Business Travel (Amex GBT) and its security team are doing just that, proactively confronting this challenge with a dual focus on cybersecurity innovation and governance. With deep roots in a bank holding company, Amex GBT upholds the highest standards of data privacy, security compliance and risk management, making secure, scalable AI adoption a mission-critical priority.
Amex GBT Chief Information Security Officer David Levin is leading this effort. He is building a cross-functional AI governance framework, embedding security into every phase of AI deployment and managing the rise of shadow AI without stifling innovation. His approach offers a blueprint for organizations navigating the high-stakes intersection of AI advancement and cyber defense.
The following are excerpts from Levin’s interview with VentureBeat:
VentureBeat: How is Amex GBT using AI to modernize threat detection and SOC operations?
David Levin: We’re integrating AI across our threat detection and response workflows. On the detection side, we use machine learning (ML) models in our SIEM and EDR tools to identify malicious behavior faster and with fewer false positives. That alone accelerates how we investigate alerts. In the SOC, AI-powered automation enriches alerts with contextual data the moment they appear. Analysts open a ticket and already see critical details; there’s no need to pivot between multiple tools for basic information.
AI also helps prioritize which alerts are likely urgent. Our analysts then spend their time on the highest-risk issues rather than sifting through noise. It’s a huge boost in efficiency. We can respond at machine speed where it makes sense, and let our expert security engineers focus on complex incidents. Ultimately, AI helps us detect threats more accurately and respond faster.
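To make that enrichment-and-prioritization step concrete, here is a minimal Python sketch of how an alert might arrive with context already attached and a priority score computed. The field names, the asset and threat-intel lookups, and the scoring weights are illustrative assumptions, not Amex GBT’s actual pipeline.

```python
# Minimal sketch of AI-assisted alert enrichment and prioritization.
# All field names, lookups and weights are hypothetical illustrations.
from dataclasses import dataclass, field


@dataclass
class Alert:
    source: str                  # e.g. "EDR" or "SIEM"
    host: str
    user: str
    model_score: float           # ML detection confidence, 0.0-1.0
    context: dict = field(default_factory=dict)
    priority: float = 0.0


def enrich(alert: Alert, asset_db: dict, intel_feed: dict) -> Alert:
    """Attach host criticality and threat-intel hits so the analyst
    opens the ticket with context already in place."""
    alert.context["asset_criticality"] = asset_db.get(alert.host, "unknown")
    alert.context["intel_hits"] = intel_feed.get(alert.host, [])
    return alert


def prioritize(alert: Alert) -> Alert:
    """Combine model confidence with business context into one score."""
    weight = {"critical": 1.0, "high": 0.7, "unknown": 0.4}.get(
        alert.context.get("asset_criticality"), 0.4)
    intel_bonus = 0.2 if alert.context.get("intel_hits") else 0.0
    alert.priority = round(alert.model_score * weight + intel_bonus, 2)
    return alert


# Usage: triage a batch so analysts see the highest-risk alerts first.
alerts = [Alert("EDR", "pay-db-01", "svc_batch", 0.91),
          Alert("SIEM", "kiosk-17", "guest", 0.55)]
asset_db = {"pay-db-01": "critical", "kiosk-17": "unknown"}
intel_feed = {"pay-db-01": ["known-C2-IP"]}

queue = sorted((prioritize(enrich(a, asset_db, intel_feed)) for a in alerts),
               key=lambda a: a.priority, reverse=True)
for a in queue:
    print(a.host, a.priority)
```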
VentureBeat: You also work with managed security partners like CrowdStrike OverWatch. How does AI function as a force multiplier for both in-house and external SOC teams?
Levin: AI amplifies our capabilities in two ways. First, CrowdStrike OverWatch gives us 24/7 threat hunting augmented by advanced machine learning. They continuously scan the environment for subtle signs of an attack, including things we’d miss if we relied on manual inspection alone. That means we have a top-tier threat intelligence team on call, using AI to filter out low-risk events and highlight real threats.
Second, AI boosts the efficiency of our internal SOC analysts. We used to manually triage far more alerts. Now, an AI engine handles that initial filtering. It can quickly distinguish suspicious from benign, so analysts only see the events that need human judgment. It feels like adding a smart virtual teammate. Our staff can handle more incidents, focus on threat hunting and pick up advanced investigations. That synergy—human expertise plus AI support—drives better outcomes than either alone.
VentureBeat: You’re heading up an AI governance framework at GBT, based on NIST principles. What does that look like, and how do you implement it cross-functionally?
Levin: We leaned on the NIST AI Risk Management Framework, which helps us systematically assess and mitigate AI-related risks around security, privacy, bias and more. We formed a cross-functional governance committee with representatives from security, legal, privacy, compliance, HR and IT. That team coordinates AI policies and ensures new projects meet our standards before going live.
Our framework covers the entire AI lifecycle. Early on, each use case is mapped against potential risks—like model drift or data exposure—and we define controls to address them. We measure performance through testing and adversarial simulations to make sure the AI isn’t easily fooled. We also insist on at least some level of explainability. If an AI flags an incident, we want to know why. Then, once systems are in production, we monitor them to confirm they still meet our security and compliance requirements. By integrating these steps into our broader risk program, AI becomes part of our overall governance rather than an afterthought.
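One way to picture that lifecycle is as a set of gates a use case must clear before going live, loosely mirroring the NIST AI RMF functions. The sketch below is a hypothetical checklist, assuming gate names and checks for illustration only; it is not Amex GBT’s actual framework.

```python
# Minimal sketch of a lifecycle gate check inspired by the NIST AI RMF
# functions (Map, Measure, Manage). Gate names and checks are illustrative.
LIFECYCLE_GATES = {
    "map": ["risks identified (drift, data exposure, bias)",
            "controls defined for each risk"],
    "measure": ["performance tested", "adversarial simulation run",
                "explainability reviewed"],
    "manage": ["production monitoring configured",
               "compliance review signed off"],
}


def ready_for_production(signoffs: dict) -> list:
    """Return the checks still missing before a use case can go live."""
    missing = []
    for gate, checks in LIFECYCLE_GATES.items():
        for check in checks:
            if check not in signoffs.get(gate, set()):
                missing.append(f"{gate}: {check}")
    return missing


# Example: a hypothetical alert-triage model partway through review.
signoffs = {"map": {"risks identified (drift, data exposure, bias)",
                    "controls defined for each risk"},
            "measure": {"performance tested"}}
print(ready_for_production(signoffs))  # prints the remaining gaps
```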
VentureBeat: How do you handle shadow AI and ensure employees follow these policies?
Levin: Shadow AI emerged the moment public generative AI tools took off. Our approach starts with clear policies: Employees must not feed confidential or sensitive data into external AI services without approval. We outline acceptable use, potential risks, and the process for vetting new tools.
On the technical side, we block unapproved AI platforms at our network edge and use data loss prevention (DLP) tools to stop sensitive content from being uploaded. If someone tries using an unauthorized AI site, they get alerted and directed to an approved alternative. We also rely heavily on training. We share real-world cautionary tales—like feeding a proprietary document into a random chatbot. That tends to stick with people. By combining user education, policy clarity and automated checks, we can curb most rogue AI usage while still encouraging legitimate innovation.
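The two controls Levin describes, an egress allowlist plus a DLP-style content check, might look something like the following sketch. The approved domain, the sensitive-data patterns and the function name are hypothetical examples, not GBT’s actual configuration.

```python
# Minimal sketch: allow only approved AI domains and run a DLP-style
# pattern check before content leaves. Domains and patterns are examples.
import re
from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {"approved-ai.example.com"}
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{13,16}\b"),         # possible payment card number
    re.compile(r"(?i)\bconfidential\b"),  # document classification marking
    re.compile(r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b", re.I),  # email
]


def allow_request(url: str, payload: str) -> tuple:
    """Return (allowed, reason) for an outbound request to an AI service."""
    host = urlparse(url).hostname or ""
    if host not in APPROVED_AI_DOMAINS:
        return False, f"{host} is not an approved AI service"
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(payload):
            return False, "payload matches a sensitive-data pattern"
    return True, "ok"


print(allow_request("https://chat.unapproved.example.net/api",
                    "summarize our Q3 roadmap"))
print(allow_request("https://approved-ai.example.com/api",
                    "CONFIDENTIAL: merger terms attached"))
```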
VentureBeat: In deploying AI for security, what technical challenges do you encounter, for instance, data security, model drift, or adversarial testing?
Levin: Data security is a primary concern. Our AI often needs system logs and user data to identify threats, so we encrypt those feeds and restrict who can access them. We also ensure that no personal or sensitive information is used unless it’s strictly necessary.
Model drift is another challenge. Attack patterns evolve constantly. If we rely on a model trained on last year’s data, we risk missing new threats. We have a schedule to retrain models when detection rates drop or false positives spike.
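A simple version of that retraining trigger could compare current detection metrics against a baseline, along the lines of the sketch below. The thresholds and metric names are assumptions made for illustration.

```python
# Minimal sketch of a drift check: flag the model for retraining when the
# detection rate falls or false positives spike versus a baseline.
# Thresholds and metric names are illustrative assumptions.
def needs_retraining(baseline: dict, current: dict,
                     max_detection_drop: float = 0.05,
                     max_fp_increase: float = 0.5) -> bool:
    detection_drop = baseline["detection_rate"] - current["detection_rate"]
    fp_increase = (current["false_positive_rate"]
                   / max(baseline["false_positive_rate"], 1e-9)) - 1.0
    return detection_drop > max_detection_drop or fp_increase > max_fp_increase


baseline = {"detection_rate": 0.96, "false_positive_rate": 0.02}
this_month = {"detection_rate": 0.88, "false_positive_rate": 0.05}
print(needs_retraining(baseline, this_month))  # True: schedule a retrain
```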
We also do adversarial testing, essentially red-teaming the AI to see if attackers could trick or bypass it. That might mean feeding the model synthetic data that masks real intrusions, or trying to manipulate logs. If we discover a vulnerability, we retrain the model or add extra checks. We’re also big on explainability: if AI recommends isolating a machine, we want to know which behavior triggered that call. That transparency fosters trust in the AI’s output and helps analysts validate it.
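In spirit, that red-team exercise can be as simple as perturbing known-malicious samples and checking whether the detector still catches them. The toy detector, the padding technique and the sample data below are stand-ins chosen for illustration, not a real model or attack method.

```python
# Minimal sketch of an adversarial test: pad malicious event batches with
# benign-looking noise and measure how often the detector still flags them.
import random


def pad_with_benign_noise(events: list, noise: list, ratio: int = 5) -> list:
    """Interleave malicious events with benign noise to try to mask them."""
    padded = list(events)
    padded.extend(random.choices(noise, k=len(events) * ratio))
    random.shuffle(padded)
    return padded


def adversarial_pass_rate(detector, malicious_samples, benign_noise) -> float:
    """Fraction of perturbed malicious samples the detector still catches."""
    caught = sum(
        detector(pad_with_benign_noise(sample, benign_noise))
        for sample in malicious_samples
    )
    return caught / len(malicious_samples)


# Toy detector: flags a batch if any event contains a known bad indicator.
detector = lambda events: any("powershell -enc" in e for e in events)
malicious = [["powershell -enc payload", "outbound 443 to rare host"]]
noise = ["user login ok", "dns lookup example.com", "patch installed"]
print(adversarial_pass_rate(detector, malicious, noise))  # 1.0 for this toy
```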
VentureBeat: Is AI changing the role of the CISO, making you more of a strategic business enabler than purely a compliance gatekeeper?
Levin: Absolutely. AI is a prime example of how security leaders can guide innovation rather than block it. Instead of just saying, “No, that’s too risky,” we’re shaping how we adopt AI from the ground up by defining acceptable use, training data standards, and monitoring for abuse. As CISO, I’m working closely with executives and product teams so we can deploy AI solutions that truly benefit the business, whether by improving the customer experience or detecting fraud faster, while still meeting regulations and protecting data.
We also have a seat at the table for big decisions. If a department wants to roll out a new AI chatbot for travel booking, they involve security early to address risk and compliance. So we’re moving beyond the compliance gatekeeper image, stepping into a role that drives responsible innovation.
VentureBeat: How is AI adoption structured globally across GBT, and how do you embed security into that process?
Levin: We took a global center of excellence approach. There’s a core AI strategy team that sets overarching standards and guidelines, then regional leads drive initiatives tailored to their markets. Because we operate worldwide, we coordinate on best practices: if the Europe team develops a robust process for AI data masking to comply with GDPR, we share that with the U.S. or Asia teams.
Security is embedded from day one through “secure by design.” Any AI project, wherever it’s initiated, faces the same risk assessments and compliance checks before launch. We do threat modeling to see how the AI could fail or be misused. We implement the same encryption and access controls globally, but also adapt to local privacy rules. This ensures that no matter where an AI system is built, it meets consistent security and trust standards.
VentureBeat: You’ve been piloting tools like CrowdStrike’s Charlotte AI for alert triage. How are AI co-pilots helping with incident response and analyst training?
Levin: With Charlotte AI we’re offloading a great deal of alert triage. The system immediately analyzes new detections, estimates severity and suggests next steps. That alone saves our tier-1 analysts hours every week. They open a ticket and see a concise summary instead of raw logs.
We can also interact with Charlotte, asking follow-up questions such as, “Is this IP address linked to prior threats?” This conversational aspect is a major help to junior analysts, who learn from the AI’s reasoning. It’s not a black box; it shares context on why it’s flagging something as malicious. The net result is faster incident response and a built-in mentorship layer for our team. We do maintain human oversight, especially for high-impact actions, but these co-pilots let us respond at machine speed while preserving analyst judgment.
VentureBeat: What do advances in AI mean for cybersecurity vendors and managed security service providers (MSSPs)?
Levin: AI is raising the bar for security solutions. We expect MDR providers to automate more of their front-end triage so human analysts can focus on the hardest problems. If a vendor can’t show meaningful AI-driven detection or real-time response, they’ll struggle to stand out. Many are embedding AI assistants like Charlotte directly into their platforms, accelerating how quickly they spot and contain threats.
That said, AI’s ubiquity also means we need to see past the buzzwords. We test and validate a vendor’s AI claims—“Show us how your model learned from our data,” or “Prove it can handle these advanced threats.” The arms race between attackers and defenders will only intensify, and security vendors that master AI will thrive. I fully expect new services—like AI-based policy enforcement or deeper forensics—to emerge from this trend.
VentureBeat: Finally, what advice would you give CISOs starting their AI journey, balancing compliance needs with enterprise innovation?
Levin: First, build a governance framework early, with clear policies and risk assessment criteria. AI is too powerful to deploy haphazardly. If you define what responsible AI means for your organization from the outset, you’ll avoid chasing compliance retroactively.
Second, partner with legal and compliance teams upfront. AI can cross boundaries in data privacy, intellectual property and more. Having them onboard early prevents nasty surprises later.
Third, start small but show ROI. Pick a high-volume security pain point (like alert triage) where AI can shine. That quick win builds credibility and confidence to expand AI efforts. Meanwhile, invest in data hygiene—clean data is everything to AI performance.
Fourth, train your people. Show analysts how AI helps them, rather than replaces them. Explain how it works, where it’s reliable and where human oversight is still required. A well-informed staff is more likely to embrace these tools.
Finally, embrace a continuous-improvement mindset. Threats evolve; so must your AI. Retrain models, run adversarial tests, gather feedback from analysts. The technology is dynamic, and you’ll need to adapt. If you do all this—clear governance, strong partnerships, ongoing measurement—AI can be an enormous enabler for security, letting you move faster and more confidently in a threat landscape that grows by the day.
VentureBeat: Where do you see AI in cybersecurity going over the next few years, both for GBT and the broader industry?
Levin: We’re heading toward autonomous SOC workflows, where AI handles more of the alert triage and initial response. Humans oversee complex incidents, but routine tasks get fully automated. We’ll also see predictive security—AI models that forecast which systems are most at risk, so teams can patch or segment them in advance.
On a broader scale, CISOs will oversee digital trust, ensuring AI is transparent, compliant with emerging laws and not easily manipulated. Vendors will refine AI to handle everything from advanced forensics to policy tuning. Attackers, meanwhile, will weaponize AI to craft stealthier phishing campaigns or develop polymorphic malware. That arms race makes robust governance and continuous improvement critical.
At GBT, I expect AI to permeate beyond the SOC into areas like fraud prevention in travel bookings, user behavior analytics and even personalized security training. Ultimately, security leaders who leverage AI thoughtfully will gain a competitive edge—protecting their enterprises at scale while freeing talent to focus on the most complex challenges. It’s a major paradigm shift, but one that promises stronger defenses and faster innovation if we manage it responsibly.