No general method to detect fraud
The parts of a convincing deception
Edward O. Thorp developed, through maths, a blackjack strategy that reversed the house's advantage and allowed the player to make money reliably. Prior to publishing the details of his system in Beat the Dealer, he travelled to Las Vegas to try it out for real - and make a bit of cash on the side. He made some profit but found that his system mysteriously didn't work as well as it should have done.
Thorp had naively assumed that casinos would play fair. In fact, his croupiers cheated him, dealing duff hands to limit his success. It was only later, when he returned to Vegas with two professional magicians who could detect sharp practice from croupiers (and prompt a change of table), that he made money in earnest.
I think if an expert with a secret breakthrough can get cheated at his own game then probably anyone can. Thorp had utterly cracked the logical part of casino blackjack: making optimal choices. The bit he'd missed was the finesse part: croupiers were very good at what is now called "close-up magic", which more than neutralised the advantage of his systematic optimal play.
I think many popular scams have a logical part and a finesse part. You need both to reliably deceive people. Three examples, from the smallest tricks to the grossest frauds:
The G.O.A.T.
Probably the most famous "trick" of all is the Monty Hall problem, which comes from a 1960s American gameshow. Here's a description:
Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?
The answer, now fairly well known, is that you should in fact switch. The reason is that when you made your original choice, you had a 1/3 chance of getting the car. That much of course is obvious. What is not obvious is that when the host swings open his chosen door (always to reveal a goat) he is providing you with new information about the situation: he is removing one bad option. That new information should feed into a fresh decision, and the fresh decision is to switch doors, which gives you a 2/3 chance of winning the car. One way to think about it is that the game only really begins once the host opens his door: that's the point at which you have all the information.
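If the two-thirds figure still seems suspicious, it's easy to check by brute force. Here is a minimal simulation sketch in Python (the door labels and trial count are my own, not part of the original problem):

```python
import random

def play(switch, n_trials=100_000):
    """Play the Monty Hall game n_trials times and return the win rate."""
    wins = 0
    for _ in range(n_trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        first_pick = random.choice(doors)
        # The host opens a door that is neither the contestant's pick nor the car.
        host_opens = random.choice([d for d in doors if d not in (first_pick, car)])
        if switch:
            # Switch to the single remaining unopened door.
            final_pick = next(d for d in doors if d not in (first_pick, host_opens))
        else:
            final_pick = first_pick
        wins += final_pick == car
    return wins / n_trials

print("stick: ", play(switch=False))   # roughly 0.33
print("switch:", play(switch=True))    # roughly 0.67
```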
The logic behind this certainly confounds intuition. It's hard to tell that you are being given new information when the host opens his door. That alone would confuse, but what gives the Monty Hall problem extra psychic power is the finesse element: humans are by nature reluctant to go back on their choices. A common mental bias is in effect here, "confirmation bias": the preference for interpreting new information as supportive of your initial choice. Psychologically it is always quite hard to go back on a personal choice - especially when the effect of that choice has not yet played out!
I think confirmation bias afflicts aficionados most of all. If you believe that, for example, "Bitcoin is the future" when others do not, then cryptocurrency becomes the most likely venue for you to be defrauded - whether or not your central belief is correct.
The investment letter
Nassim Taleb's first book, Fooled by Randomness, mentioned a scam that tickled me:
You get an anonymous letter on January 2 informing you that the market will go up during the month. It proves to be true, but you disregard it owing to the well-known January effect (stocks have gone up historically during January). Then you receive another one on February 1 telling you that the market will go down. Again, it proves to be true. Then you get another letter on March 1 - same story. By July you are intrigued by the prescience of the anonymous person and you are asked to invest in a special offshore fund. You pour all your savings into it. Two months later, your money is gone. You go spill your tears on your neighbor’s shoulder and he tells you that he remembers that he received two such mysterious letters. But the mailings stopped at the second letter. He recalls that the first one was correct in its prediction, the other incorrect.
In this scenario the scammer starts out by writing to a huge number of people, half with a prediction that the market will go up and the other half with a prediction that it will go down. At the end of each month, he discards those to whom he made an inaccurate prediction and splits the rest into two groups, sending new (opposite) predictions to each group. Eventually some of those being scammed "convert" (I may as well use the marketing terminology, given how closely this resembles an A/B test) and he's able to dupe them into making an investment.
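The arithmetic is worth spelling out: each month's mailing halves the pool but leaves everyone remaining with a perfect track record. A rough sketch (the starting list size and the six-month horizon are my own assumptions, not figures from Taleb):

```python
# Rough arithmetic for the investment-letter scam: start with a mailing
# list, discard everyone who received a wrong prediction each month, and
# count how many people are left who have only ever seen correct calls.

recipients = 64_000   # assumed starting mailing list
months = 6            # assumed length of the "track record"

for month in range(1, months + 1):
    recipients //= 2  # only the half who got a correct prediction stay
    print(f"after month {month}: {recipients} people have seen "
          f"{month} correct prediction(s) in a row")

# After six months, 1,000 people have watched six correct predictions in
# a row - and the scammer never had to predict anything.
```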
The logical part here is survivorship bias - the systematic omission of things that have failed, leaving a rosy-looking dataset. I think of survivorship bias as the pet bias of high finance. It's the reason why, judging by magazine ads, all actively managed mutual funds have had a good run over the past few years. Of course, the reason for this is that the funds that fared badly do not buy magazine adverts (they probably did - several years ago!) - if they still exist at all.
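You can reproduce the effect with toy numbers. In the sketch below every fund's returns are pure noise, and funds that lost money over the period are assumed to have been quietly closed (the fund count, return distribution and survival rule are all invented for illustration):

```python
import random

random.seed(0)

N_FUNDS = 1_000
YEARS = 5

# Each fund's annual returns are pure noise: mean 0%, standard deviation 10%.
funds = [[random.gauss(0.0, 0.10) for _ in range(YEARS)] for _ in range(N_FUNDS)]

def cumulative(returns):
    total = 1.0
    for r in returns:
        total *= 1 + r
    return total - 1

# Assume funds that lost money over the period were quietly closed or merged.
survivors = [f for f in funds if cumulative(f) > 0]

avg_all = sum(cumulative(f) for f in funds) / len(funds)
avg_survivors = sum(cumulative(f) for f in survivors) / len(survivors)

print(f"all funds: {avg_all:+.1%} average cumulative return")
print(f"survivors: {avg_survivors:+.1%} average cumulative return")
# The survivors' average looks healthy even though the returns were
# generated with zero skill.
```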
The letter scam has a bit more going for it than just one logical bias. The finesse in the letter scam is the change in perspective: everything has been rotated. You aren't looking at a selective dataset - you are instead a member of it, with no insight into who else is present or absent. Turning things around to a queer angle is a recurring trick in scams as it defeats all the usual rules of thumb that the public have built up as a mental defence.
The (very) long con
Bernie Madoff ran a Ponzi scheme pretending to be the world's largest hedge fund: taking deposits, posting fictitious profits and then taking more deposits. He was quite successful in this and the scheme could grow as long as the deposits kept coming. It was only when an unrelated crisis in mortgage-backed securities (another scam) caused too many clients to ask for their money back at the same time that his scheme hit the buffers.
How did he manage to trick so many? An embarrassingly large number of sophisticated people and institutions lost large sums. HSBC lost a billion dollars in Madoff's fraud. You would hope that HSBC, the world's 6th largest bank by assets, were checking up on things, but apparently not. They weren't the only ones. One reason they were fooled is that Madoff's results were plausible and never extreme enough to stretch credulity (on their own - though his claimed trading strategy did arouse suspicion).
The collapse of Madoff Investment Securities tends to get lumped in with the 2008 financial crisis because it happened at the same time. But Madoff's fund probably became fraudulent as early as the late 1970s. That means he ran his Ponzi scheme for nearly 40 years, mostly undetected and certainly untroubled. The finesse part is that Madoff had the tacit support of the American financial regulator, which undertook multiple investigations, each of which gave his fund a clean bill of health. Madoff was successful in befriending the regulator, and somehow prevented it from discovering his massive deceit - probably in part because they ruled out the unreasonable: that it was a massive fraud.
There is even perhaps a second finesse part happening with Madoff: that of widespread social proof. He had a huge number of notable investors. Theranos, the fraudulent blood testing company, also amassed a huge number of establishment supporters: former cabinet ministers, generals and famous medics. Frauds like Madoff and Theranos can benefit from a perverse sort of "herd vulnerability" in that once enough notables make positive public pronouncements it's easy to convince the rest.
What magic and scams have in common
Jonathan Creek, the 90s TV detective/magician, had a compelling philosophy of magic: that magic is when you have expended more effort to achieve a trick than observers think is reasonable. That you've spent hundreds of hours practising with decks of cards, that you've built a secret passageway across your stage, that you've erected an enormous mirror in a public place, etc.
The idea is that when people look at an illusion, they really consider only the reasonable ways by which the trick could have been achieved, mentally ruling out all the unreasonable ways. For a trick to work it needs to confound Occam's razor: the true explanation must be one of the complicated ones.
That goes for scams too. The best finesse layers are those that really seem too complicated to be worthwhile - but somehow are, often due to the large scale of the scam.
Are there rules to help detect scams?
It would be brill if there were a simple set of rules that you could follow to detect scams and frauds. Any time there was doubt, you could retreat deep into your bunker of pure logic, get out your scam detection algorithm and set to work with it.
The reason why there can never be any such algorithm is that logical inference already takes as read that you have correctly perceived reality - you need to already have the facts right in order to run through your algorithm. Most scams are attempts to inhibit your ability to perceive reality, first by tricking you into a logical fallacy and then by doubly hiding that fact with sleight of hand. With a good deceit it's very hard to know how to wrap your mind around it - it doesn't look like anything that you've seen before.
In the Monty Hall problem, it is genuinely difficult to notice that, when the host opens his door, new information has suddenly been injected into the problem. Even Paul Erdős, one of the most prolific problem interpreters (and solvers) of all time, was initially unable to grasp what was going on. And that's before the psychological attachment to your initial choice is considered. How can you apply the right mental model if you can't even perceive what's going on?
When I worked in a Big American Bank, they were very worried about employees being phished by baddies on the internet. In order to build up sufficient apprehension around incoming emails the bank decided to repeatedly phish its own employees: deliberately sending false emails that would trick you into taking action. Clicking on a link or opening the attachment automatically enrolled you in a particularly miserable piece of mandatory training on information security, as a punishment. The bank's phishing emails were pretty convincing and I had to repeat the training course a few times before gaining a deep distrust of email hyperlinks.
I think they were on the right lines. The best defence against frauds and scams seems to be a kind of "intellectual vaccination" via repeated exposure to benign, non-functional specimens.
See also
If you're interested in accounting, I recommend the book Financial Shenanigans, which details many common ruses. It sounds as boring as sin - a simple enumeration of common frauds concerning financial statements - but the ingenuity of crooked accountants makes it compelling. There are just so many different ways to cook the books: recognising revenue in surprising places, capitalising expenses on a rolling basis, moving things back and forth across accounting periods, and so on. My feeling is that many of these probably happen regularly in listed companies and are never detected.
I strongly recommend Fooled by Randomness, which is a really enjoyable run through mental deceits both inflicted and self-inflicted. Edward O. Thorp published his autobiography recently which compiles many of his previous adventures into one volume: A Man for All Markets. I found it surprising how much of his success relied on not being defrauded at various stages.
Not all scams take money. Some take your time. A surprising number of employees get duped into accepting part of their pay packet as options on their employers' private (and highly illiquid) stock. How many people would willingly invest their salaries in financial derivatives on the stock of their own employers? Invariably these companies don't open their accounts to these employees so that they could conduct an independent valuation and, even if they did, your typical IT worker is not so hot at pricing equity derivatives. Instead they are talked into grossly inflated valuations which see them accepting uncompetitive salaries or staying far too long in a bad job.
The collapse of big frauds can happen in strange ways. I was impressed that, when the Financial Times outed the fraud at Wirecard, the German financial regulator started litigation - against the newspaper! - and banned short selling of Wirecard's stock.