The Role and Ethics of AI Use in Online Dating
From Helpful Learning Tool to Problematic Deception
The Washington Post has a new podcast up this week about the ways AI is changing online dating. Some of these uses risk crossing the line into deception. I have written previously in law review articles here and here about deception in online dating, from sexual fraud to hiding one's true identity for purposes of financial fraud or outright violence.
Deploying AI as a learning tool seems relatively unproblematic and could even turn someone into a genuinely better partner. Users of AI dating coaches have at times reported positive experiences with self-development in the relationship context. When it comes to coaching, one way to draw the line into the unethical might be the distinction between truly improving oneself and seeking out manipulation techniques to trick others, in the genre of pick-up artists.
Those who use AI in the online dating context should ask themselves whether their interaction style in the physical world will fail to live up to the image their AI-polished texting conveyed. Another, related question is whether their match would feel frustrated if they learned the extent of the AI use. It would certainly be unethical to use AI to engage in what Prof. Jill Hasday, in her book on intimate lies and the law, has termed "linchpin deception": hiding a known dealbreaker (sometimes in the hope of overcoming it later through personal charm or the like).
Another phenomenon the WaPo podcast mentions is that AI may hide red flags (or, as the hosts call them, signals) in an individual's profile or chat. Scholar Jennie Young, in particular, has become known for her linguistic analyses of such texts through what she calls the Burned Haystack Dating Method, now accompanied by a Facebook group boasting over 200,000 members. For example, she recommends that women left-swipe men whose profiles or texting display so-called directive behavior (telling another user what to do), because it suggests problematic relational patterns down the line.
We can easily imagine AI being fed Young's techniques to make sure a predatory user does not tip his hand so easily. That could assist individuals who would turn out to be not merely a "bad date" in person but downright dangerous. That said, one might also picture the reverse: perhaps AI could be deployed to detect cues that another user is problematic or, to come full circle, has himself used AI in his profile or texting!
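To give a flavor of that "full circle" possibility, here is a minimal, hypothetical sketch (in Python) of the crudest version of such screening: a simple pattern pass over profile text for directive phrasing. The phrase list and function name are illustrative inventions of mine, not Young's actual method or any real product, and a serious detector would need far more linguistic nuance than keyword matching can provide.

```python
import re

# Hypothetical examples of directive phrasing ("telling another user what to do");
# illustrative only, not drawn from the Burned Haystack materials.
DIRECTIVE_PATTERNS = [
    r"\bmust\s+(?:love|be|have)\b",
    r"\bdon'?t\s+(?:bother|message|swipe)\b",
    r"\bno\s+(?:drama|games)\b",
    r"\bswipe\s+left\s+if\b",
    r"\byou\s+(?:should|need to|have to)\b",
]

def flag_directive_phrasing(text: str) -> list[str]:
    """Return any directive-sounding snippets found in a profile or message."""
    hits = []
    for pattern in DIRECTIVE_PATTERNS:
        hits.extend(m.group(0) for m in re.finditer(pattern, text, re.IGNORECASE))
    return hits

if __name__ == "__main__":
    sample = "Must love dogs. Don't bother messaging me if you can't hold a conversation."
    print(flag_directive_phrasing(sample))  # e.g. ['Must love', "Don't bother"]
```

Of course, the same list read in reverse shows how trivially a chatbot could scrub exactly these phrasings from a profile before it is ever posted, which is the worry raised above.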
For safety purposes, the increased use of AI in text-based media strengthens the already considerable case for having a phone call or video chat before a date. While there remains a risk of deepfake technology being used in a video chat, that requires a greater level of sophistication on the part of predators than the mere use of a chatbot. In short, it is a safety measure far better than nothing.