When the Threat Travels With Your People: AI-Enabled Fraud and the Duty of Care Gap
The Moment of Maximum Vulnerability

Crisis disruption does not simply strand travellers. It creates the conditions in which they are most susceptible to manipulation. When flights are cancelled, rebooking queues are overwhelmed and official guidance is fragmented, travellers do what comes naturally: they search for help, they post on social media, they click on links that appear to offer resolution. Fraudsters understand this behaviour precisely because it is predictable, and they have learned to exploit it at scale.

The Middle East conflict of early 2026 produced a textbook case study. Within days of widespread flight disruption across the Gulf, a coordinated wave of AI-assisted fraud activity targeted passengers of Emirates, Etihad and Qatar Airways. The techniques deployed were not crude. They were architecturally sophisticated, operationally fast and, in several cases, difficult to distinguish from legitimate airline communications.

For corporate travel risk managers and duty of care professionals, this is not a consumer issue that sits at arm’s length. It is an operational exposure with direct liability implications.

Four Attack Vectors Your Travellers Are Facing

1. Fake Social Media Accounts Impersonating Airlines

On the platform X, fraudsters constructed accounts using airline branding, logos and generic service-oriented names such as “Support Team”, “Quick Response Team” or “Guest Services Care”. These accounts actively monitored public posts from distressed passengers and replied directly, initiating contact under the appearance of legitimate assistance. Santander UK’s fraud team confirmed it had already received reports from customers caught in this pattern. Etihad Airways issued a formal advisory on 11 March 2026 confirming the existence of multiple fake accounts impersonating the airline, and clarified that its only verified accounts on X are @Etihad and @EtihadHelp.
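The verified-handle advisory above translates directly into a simple organisational control: an exact-match allowlist check that a traveller support tool or briefing app could run before a traveller engages an account. The sketch below is illustrative, not an Etihad or X API; only @Etihad and @EtihadHelp come from the airline's advisory, and the lookalike handles shown are hypothetical examples of the impersonation pattern described.

```python
# Minimal allowlist check: treat any handle not on the verified list as untrusted.
# Only @Etihad and @EtihadHelp are confirmed in the airline's advisory; any further
# entries an organisation adds should come from the carrier's own verified pages.
VERIFIED_X_HANDLES = {"etihad", "etihadhelp"}

def is_verified_airline_handle(handle: str) -> bool:
    """Return True only for an exact match against the verified list.

    Near-matches such as "Etihad_SupportTeam" or "GuestServicesCare"
    deliberately fail: lookalike names are the signature of impersonation.
    """
    normalised = handle.strip().lstrip("@").lower()
    return normalised in VERIFIED_X_HANDLES

# The generic "support" accounts described above all fail the check.
for candidate in ("@Etihad", "@EtihadHelp", "@Etihad_SupportTeam", "@GuestServicesCare"):
    status = "verified" if is_verified_airline_handle(candidate) else "UNTRUSTED"
    print(candidate, "->", status)
```

The design choice matters: exact matching, not fuzzy matching, because the fraudulent accounts succeed precisely by being plausibly close to the real names.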
The mechanics of the scam followed a consistent pattern: the passenger is drawn into a direct message exchange, asked to confirm personal and contact details, then directed to a money transfer application under the pretence of receiving a refund. Instead, funds are debited.

Duty of care implication: Your travellers are searching for help in real time, often on personal devices, using personal accounts. Their interactions with apparent airline support are invisible to your travel management infrastructure. There is no trigger in your booking or tracking platform that flags this exposure.

2. AI-Generated Identities Used to Fabricate Credibility

Bellingcat’s investigation into the case of “Tamara Harema”, published on 12 March 2026, documented a more elaborate variant. An interview was published in De Telegraaf, the Netherlands’ largest newspaper, featuring a woman claiming to organise private evacuation flights from Dubai at €1,600 per seat. The article reached the desk of the Dutch Foreign Affairs Minister.

Subsequent analysis found multiple AI generation artefacts in the published photograph: distorted architectural features inconsistent with the actual Burj Khalifa, a furniture anomaly, blurring on clothing and an earring that appeared to merge into the subject’s face. Flight-tracking data from Flightradar24 confirmed that no aircraft matching the described A321 departed Muscat bound for the Netherlands on the stated dates. The source who introduced “Harema” to the newspaper was a Dubai-based lawyer with a documented history of fraud-related insolvency proceedings in the Netherlands.

The fraud did not require a sophisticated technical operation. It required a convincing AI-generated image, a plausible narrative and a trusted intermediary willing to make an introduction. In a crisis environment, those three ingredients are readily assembled.

Duty of care implication: When a traveller cannot secure a seat on a repatriation flight, they will seek alternatives.
An AI-generated persona offering charter capacity at a credible price point, promoted through a credible channel, is indistinguishable from a legitimate operator to someone under stress and time pressure.

3. Fraudulent Refund Links Distributed via Social and Email

Both Emirates and Etihad issued explicit warnings against sharing booking information, contact details or payment data in response to social media posts. The UAE Ministry of Interior separately warned on 4 March 2026 against fraudulent emails purporting to offer emergency registration, compensation or insurance, which directed recipients to fake forms designed to harvest personal and financial data. Abu Dhabi Police confirmed that fraudsters deliberately target periods of travel disruption, when passengers are actively expecting communications from airlines and official bodies, making fraudulent messages proportionally more convincing.

Duty of care implication: Travellers with corporate bookings are likely to use corporate payment instruments. A successful refund scam executed through a corporate card or virtual payment credential creates both a financial exposure and a data breach event.

4. AI-Generated Service Listings Beyond the Airline Channel

While not specific to the current crisis, Bellingcat’s March 2025 analysis of AI-generated product fraud on platforms including Amazon, eBay and Etsy documents the systematic use of AI-generated imagery to misrepresent goods. The techniques identified, including image inconsistencies, missing product angles, implausible pricing and fictitious seller identities, are directly transferable to the sale of fake travel services: non-existent hotel accommodation, fabricated airport transfers and fraudulent visa facilitation.

During a regional crisis, demand for any available service spikes sharply. Travellers will book accommodation, ground transport and logistical support through channels they would not ordinarily use. The fraud surface expands accordingly.
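The refund-link vector above lends itself to a concrete technical control: screening link domains against an allowlist before a traveller follows them, for example in a corporate email gateway or a travel app's link handler. The sketch below is a minimal illustration, not any vendor's API; the domains in the allowlist are illustrative assumptions, and a real deployment would populate the list from the organisation's carrier and TMC records.

```python
from urllib.parse import urlsplit

# Illustrative allowlist of trusted registrable domains (assumed for this sketch;
# a real deployment sources these from travel-management records, not hard-coding).
TRUSTED_DOMAINS = {"emirates.com", "etihad.com", "qatarairways.com"}

def link_looks_official(url: str) -> bool:
    """Accept a link only if its host is a trusted domain or a subdomain of one.

    Matching anchors on the right-hand side of the hostname, so a lookalike
    such as "emirates.com.refunds-helpdesk.net" is rejected even though it
    begins with the legitimate brand name.
    """
    host = (urlsplit(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
```

This naive suffix check is enough to illustrate the principle; a production filter should use a public-suffix-aware library and handle redirects, since shorteners and tracking links can mask the final destination.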
Why This Is a Technology and Governance Problem, Not Just User Behaviour

It is tempting to frame this as a traveller awareness issue, which it partly is. However, the underlying challenge is structural. AI-generated content has crossed the threshold at which visual and contextual plausibility can no longer be reliably assessed by an individual under cognitive stress. The Harema case demonstrates this clearly: the photograph deceived a professional newsroom long enough to be published and cited at ministerial level.

The expectation that a distressed traveller, operating alone, on a mobile device, in an unfamiliar environment, will perform rigorous open-source verification before clicking a link or making a payment is not a reasonable control. Corporate travel risk programmes that rely on traveller awareness as their primary defence against AI-enabled fraud are operating with an inadequate control architecture.

What Robust Organisational Controls Look Like

Travel risk managers and technology leads should be examining the following areas:

Pre-trip briefing, updated for AI fraud vectors. Travellers operating in elevated-risk regions should receive explicit, scenario-based guidance