LexisNexis CTO: Malicious Searches Hard to ID
When it comes to protecting the data held on LexisNexis' voluminous databases, CTO Allan McLaughlin has nothing but the best intentions. Unfortunately, the same cannot be said of everyone gaining access to that information. In the battle against malicious intruders, McLaughlin says the key is discerning intent, identifying anomalies and acting quickly.
CIO Insight: How do you solve the problem of customer security?
McLaughlin: Well, first you have to find every possible way to educate people. Second, you have to take whatever controls you have as far as possible, without building hurdles that get in the way of customers using the service. Educating your own employees is also crucial. And as you build your products, you have to take advantage of rapidly evolving technologies, what I call "fences in the environment," that can find anomalies in real time rather than after the fact.
How do you find those anomalies today?
Most of the world finds anomalies as part of an after-the-fact batch process, when they're looking at logs or looking at bills. But by that time, whatever bad things you were trying to prevent have already happened.
Do those real-time technologies exist today?
[Sidebar: Allan McLaughlin]
2000-present: Senior Vice President and Chief Technology Officer, LexisNexis U.S.
1988-2000: Managing Director, Reed Elsevier Technology Group, and Vice President, Reed Elsevier Inc.
Education: B.S. in Mathematics, West Virginia Wesleyan College; MBA, University of Dayton
Affiliations: Chairman, Wright Center of Innovation for Advanced Data Management and Analysis Research Center; Board of Directors, Nextedge Technology Park Corp.; Advisory Board, Wright State University's College of Engineering and Computer Science, Dayton, Ohio
They are getting better and better, but it's still not real time. The problem is, who knows what people using the database are looking for? These systems are there for people to access information. So how do you determine the difference between someone who is looking with malicious intent versus someone using it for the purpose that it was designed for?
I don't know. How?
If a guy walks into a bank wearing a mask, you have a pretty good idea what he's there for. You might be wrong 1 percent of the time. But 99 percent of the time, he's not there for good things.
Many times, for instance, the first thing bad people do is look for stuff about themselves. Who knows about me? And what do they know? So you build algorithms, mathematically and manually, that look for patterns outside two or three standard deviations from the typical user of that service, whether it's a billing system, an alumni listing, an e-mail listing or an information provider.
When it happens, you raise the flag. Someone looks into it, hopefully within hours or minutes rather than weeks or months, and determines if it's legit. And if it is, then you adapt your algorithm accordingly.
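The loop McLaughlin describes, flag anything more than two or three standard deviations from typical usage, have a person review it, then fold confirmed-legitimate activity back into the baseline, can be sketched in a few lines. The data, field names and threshold below are illustrative assumptions, not LexisNexis' actual algorithm:

```python
import statistics

def flag_anomalies(daily_searches, threshold=3.0):
    """Flag accounts whose search volume sits more than `threshold`
    standard deviations above the mean for this service.
    Illustrative sketch only; the threshold and data are assumptions."""
    volumes = list(daily_searches.values())
    mean = statistics.mean(volumes)
    stdev = statistics.pstdev(volumes)
    if stdev == 0:
        return []  # all users identical, nothing stands out
    return [account for account, count in daily_searches.items()
            if (count - mean) / stdev > threshold]

# Twelve typical users plus one account hammering the database.
usage = {f"user{i}": 50 for i in range(12)}
usage["scraper"] = 900

print(flag_anomalies(usage))  # ['scraper']
# A reviewer then decides whether "scraper" is legit; if so, its
# usage goes into the next baseline, adapting the algorithm.
```

If the flagged account turns out to be legitimate, its volume simply becomes part of the next baseline computation, which is the adaptation step he mentions.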
But people walk into online databases wearing masks all the time, and you can't see that.
Right. So one of the things we do is avoid signing contracts with people who write scripts that search the databases automatically. We charge by the number of transactions, so if you see an account or an ID processing a hundred transactions a second, who can type that fast?
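That rate check, no human types a hundred queries a second, amounts to counting transactions per account over a sliding window and flagging anything beyond a plausible human pace. A minimal sketch, where the class name, window and limit are all assumptions for illustration:

```python
from collections import defaultdict, deque

HUMAN_MAX_PER_SECOND = 5  # assumed ceiling for human-driven searching

class RateFlagger:
    """Flag accounts transacting faster than a person could type.
    Illustrative only; not LexisNexis' actual system."""
    def __init__(self, window_seconds=1.0, limit=HUMAN_MAX_PER_SECOND):
        self.window = window_seconds
        self.limit = limit
        self.events = defaultdict(deque)  # account -> recent timestamps

    def record(self, account, timestamp):
        """Record one transaction; return True if the account's rate
        within the window now looks scripted."""
        recent = self.events[account]
        recent.append(timestamp)
        while recent and timestamp - recent[0] > self.window:
            recent.popleft()  # drop events outside the window
        return len(recent) > self.limit

flagger = RateFlagger()
# 100 transactions in one second from a single ID: clearly a script.
hits = [flagger.record("acct42", t / 100.0) for t in range(100)]
print(any(hits))  # True
```

Because it keys on rate rather than content, this kind of fence can fire in real time, before the batch logs are ever examined.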
So we're getting better at that algorithmic approach. Because the monitoring stuff, that's a rearview mirror.