Picture: THE NEW YORK TIMES/TONY CENICOLA
NEW YORK — The computer systems that run our world — the ones that secure our financial information, protect our privacy and even keep our power grid running — all have a critical, unpatchable weakness.

It is the humans who use them.

As the toll of data breaches and hacks mounts, and the spectre of a "cyber Pearl Harbour" looms, it is worth asking: how do we defend against a breach not of our computers, but our minds?

This problem has plagued every network since the dawn of connectivity, says Maria Konnikova, author of the new book The Confidence Game, an investigation of the minds and methods of con artists.

Almost as soon as there were wires, there was wire fraud.

Fred and Charley Gondorf, two brothers who operated in New York around the turn of the 20th century, orchestrated a scheme in which they convinced people that a disgruntled telegraph operator would tip them to results of horse races before the information reached betting houses.

Of course, there was no disgruntled operator — just the Gondorfs, who made off with whatever wager the mark had put down.

"To this day, the weak link in all of the interactions between (humans and information technology) systems is the human who is making a mistake," says Moran Cerf, a former hacker turned neuroscientist.

Mr Cerf’s twin passions give him a unique insight into the problem, which he says has only grown worse as technology has progressed.

That is why people continue to fall for e-mail phishing scams, which can unleash widespread cyberattacks, despite publicity and corporate training.

The information hackers and con artists need to persuade someone to trust them is more readily available than ever.

If you have ever accepted a friend request on Facebook from someone you do not know, even someone with whom Facebook says you have mutual friends, you are part of the problem.

Facebook is a huge trove of everything from our contacts to our whereabouts, says Mr Cerf, and clever algorithms can glean plenty of information we do not even know we are revealing, from our tastes to our politics.

Friending strangers on Facebook through fake accounts — and then leveraging mutual connections to gain access to the network of a mark — is a common tactic of the "social engineering" style of hacking.

Of course, social media is just the beginning.

"Amazon wish lists are a treasure trove, so is your eBay bidding history," says Ms Konnikova.

All this stuff is there for the taking, and you do not have to be a sophisticated hacker to compile pretty intricate profiles of a person.

Whenever someone has information about us, we are more likely to trust them.

That insight has helped hackers sharpen phishing attacks, in which they spam corporate inboxes with e-mails targeted to individuals in ways that make them look more credible.

These more-personalised "spear phishing" attacks are more likely to succeed because they appear to come from someone we know — or think we know.

The moment someone downloads an attachment to an e-mail or clicks on a link, their system is infected, and attackers can move laterally through a network, quickly progressing from a lowly press officer’s computer into an IT system’s most sensitive innards.

You might ask who would be naive enough to be taken in.

The answer is plenty of us.

In one study of 150,000 test e-mails sent by two of its security partners, researchers at Verizon Enterprise Solutions found that 23% of recipients opened the e-mail, and 11% clicked on the attachment, which under normal circumstances would have carried a payload of malware.

Or, as Verizon’s 2015 data breach report so colourfully put it, "a campaign of just 10 e-mails yields a greater than 90% chance that at least one person will become the criminal’s prey, and it’s bag it, tag it, sell it to the butcher."

The obvious solution to this problem is to teach people to be more wary of everything in their inbox.

But history has shown that does not work.

Banks, in particular, are spending huge sums trying to teach their employees not to open suspicious e-mails.

But how can you do that effectively when, for example, you are JPMorgan Chase — which recently suffered a breach of data of about 76 million households — and you have more than 250,000 employees?

"Securing the computers of 250,000 people, or getting 250,000 people to comply, is a virtual impossibility," says Shawn Henry, president of cybersecurity firm CrowdStrike and a former executive assistant director at the Federal Bureau of Investigation (FBI).

The solution, says Mr Henry, is to assume that humans will fail, and automate around them.

Better e-mail filtering can make a huge difference, as can systems within a company's IT infrastructure that work almost like an immune system, monitoring internal traffic to catch malware after it has already infected the network.

Given the long and storied history of con artists and their modern equivalents, hackers who use social engineering, we really have no choice.

Despite more than a decade of attempting to educate people about phishing attacks, Verizon’s report says, they remain the second-most-common point of entry into an IT system, and have been on the rise since 2011.

They are also the most popular way for governments to conduct cyber-espionage, which means they are the leading edge of what are potentially the most dangerous intrusions.

History has shown us we are not going to win this war by changing human behaviour.

But maybe we can build systems that are so locked down that humans lose the ability to make dumb mistakes.

Until we gain the ability to upgrade the human brain, it is the only way.
