If Isaac Newton were to be a data scientist


Want to predict the remaining useful life of a submersible pump or a turbo engine on a plane? Or the performance of an oil field? Or the price of crude oil? If you are a data scientist, chances are that someone with a pedigree harder than that of a data scientist has asked you the following question with an annoying smile on their face – "How does your model account for the underlying physics, you know, real science, of these situations?" – and you have stumbled and bumbled your way out, unless, of course, you were a physicist before someone told you about an easier way to make a living than trying to come up with a theory of everything – not a small probability, but still leaving out most of us. Merely claiming that many machine-learning models are theoretically capable of learning any physics model – a claim that may be true, but in such an impractical way that it might as well be false – may come across as smug at the very least, merely unbelievable if you are lucky, and downright fraudulent if you are dressed too well.

Still, I thought why not dig a bit deeper into this and muse about what would have happened if machine learning had come to be known before Newton watched that apple fall to the ground. Not possible unless an abacus is the computer of your choice, but, hey, let's run with it for a while longer. Newton would have first done what most data scientists do, which, of course, is ask for data – the more the better. In Newton's case, since he also set up the problem, that meant collecting the data himself. Damn it. I hate it when they don't even have data.

But what data? It's likely that Newton would have tried to predict how long an apple takes to fall to the ground. Or a book dropped from a building. Or something heavier from a different height. His data, then, would have been the weight of the object, the height it was dropped from, and the time it took for the object to fall to the ground.
After weeks of dropping objects, clandestinely, naturally, to avoid raising suspicion among the less scientifically inclined, he put together a little matrix of tabular data – weight, height and time – which could have looked something like this:
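Newton's actual notebook is not available, but a minimal sketch of what such a dataset might look like can be simulated from the physics we now know – fall time t = sqrt(2h/g) – plus a dash of measurement noise. The weight and height ranges below are made up for illustration.

```python
import math
import random

# Illustrative reconstruction of the drop experiments. The true physics
# (which our fictional Newton does not yet know) gives t = sqrt(2h / g);
# Gaussian noise stands in for wind, timing error, and shaky hands.
G = 9.81  # gravitational acceleration, m/s^2

def simulate_drops(n_trials=30, seed=42):
    rng = random.Random(seed)
    rows = []
    for _ in range(n_trials):
        weight = rng.uniform(0.1, 5.0)   # kg: apples to heavy books
        height = rng.uniform(2.0, 50.0)  # m: tree branch to tower
        fall_time = math.sqrt(2 * height / G) + rng.gauss(0, 0.02)
        rows.append((weight, height, fall_time))
    return rows

data = simulate_drops()
print("weight (kg)  height (m)  time (s)")
for w, h, t in data[:3]:
    print(f"{w:11.2f}  {h:10.1f}  {t:8.2f}")
```

Note that, by construction, weight has no influence on fall time – the question is whether a model fit to such data would notice.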


To start with, Newton, the data scientist, likely would have thrown the data as-is at a straightforward linear regression. He would have been pleasantly surprised at how good the results look. An R-squared of 0.986, a standard error of 0.155. A very low p-value for two of the three coefficients. Many a data scientist would take this without a second thought. If this model were to be deployed for a specific application – when should a stone be dropped to hit an approaching enemy's helmeted head, for example – this type of accuracy is more than good enough. No real understanding of anything remotely connected to the theory of gravitation is to be found here. But it does the job. Does this count as success? Does the model account for the underlying physics? No, and yet the model is not entirely wrong either. In fact, it's good enough for that application.
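A hedged sketch of that first model on simulated drop data (the numbers here are illustrative, not Newton's): an ordinary least-squares fit of time against weight and height.

```python
import numpy as np

# Plain linear regression, time ~ intercept + weight + height, on
# synthetic drop data generated from t = sqrt(2h/g) plus noise.
rng = np.random.default_rng(0)
g = 9.81
n = 200
weight = rng.uniform(0.1, 5.0, n)
height = rng.uniform(2.0, 50.0, n)
time = np.sqrt(2 * height / g) + rng.normal(0, 0.02, n)

# Design matrix: [1, weight, height]; solve by least squares.
X = np.column_stack([np.ones(n), weight, height])
coef, *_ = np.linalg.lstsq(X, time, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((time - pred) ** 2) / np.sum((time - time.mean()) ** 2)
print(f"intercept={coef[0]:.3f}  weight={coef[1]:.4f}  height={coef[2]:.4f}")
print(f"R^2 = {r2:.3f}")
```

The fit is very good despite the true relationship being a square root rather than a line – exactly the "good enough for the application" outcome described above, with the weight coefficient coming out near zero.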

Linear Regression Model

Newton may have gone further and tried another standard trick in linear regression – a log-linear model. The results would have been even better. An R-squared of 0.995, a standard error of 0.029. Even lower p-values for all coefficients. In terms of the application, the model would not be terribly different in its predictions, at least in the range of heights over which the training data was collected, but the model seems clearly better. What the data scientist may not stop to ponder is that the log model is not merely more accurate; it has discovered a key effect of the classical theory of gravity – a body falls to earth at a constant rate of acceleration, and that rate is independent of the mass of the body. Strictly speaking, the model has a small but non-zero coefficient on weight, and Newton, the data scientist, again may not stop to wonder whether it should be exactly zero. To complicate matters, all the real-world issues step in: atmosphere, variable wind resistance, inaccuracies in the measurements of weight, time and height, the presence of other bodies, and so on and so forth. These factors will stop even a perfect experiment from yielding an exactly zero coefficient on body weight. The platonic ideal form of Newton's elegant laws may never have been discovered by Newton, the data scientist. But as before, this does not detract from the usefulness, or even the accuracy, of the log-linear regression model.
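The "discovery" is easy to see in a sketch. Since the true relation is t = sqrt(2h/g), taking logs gives log t = 0.5·log h + 0.5·log(2/g): a log-linear fit should put an exponent near 0.5 on height and near 0 on weight. Again, synthetic and illustrative data.

```python
import numpy as np

# Log-linear fit: log(time) ~ log(weight) + log(height).
# Multiplicative noise keeps the log-space errors well behaved.
rng = np.random.default_rng(1)
g = 9.81
n = 200
weight = rng.uniform(0.1, 5.0, n)
height = rng.uniform(2.0, 50.0, n)
time = np.sqrt(2 * height / g) * np.exp(rng.normal(0, 0.01, n))

X = np.column_stack([np.ones(n), np.log(weight), np.log(height)])
coef, *_ = np.linalg.lstsq(X, np.log(time), rcond=None)
print(f"weight exponent = {coef[1]:+.4f}  (expect ~ 0)")
print(f"height exponent = {coef[2]:+.4f}  (expect ~ 0.5)")
```

The weight exponent comes out small but, as the paragraph above notes, not exactly zero – noise guarantees that even a perfect experiment would leave a residual coefficient.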

Log-linear Regression Model

How far can this be taken? After all, we haven't even gone beyond regression yet. Could all the measurements of planetary bodies taken by astronomers prior to Newton's arrival on the scene have been fed by Newton into a machine-learning model, for him to create a representation of the full theory of gravitation? Could enough measurement data be used to learn and predict Einstein's theory of relativity, explaining the minute discrepancies in astronomical calculations from the Newtonian version? We are not talking about building a full understanding of the physics, of course, but still a good enough model for predictions in similar situations. The answer is a qualified yes. With enough data, a predictive model that performs very close to the ideal form can be built. The qualification is owed to the very real and often prohibitive difficulty of getting that finely granular data, and of identifying and training the right machine-learning model in a practical amount of time. Linear, log-linear or even generalized linear regression is certainly not going to be enough, but a multi-layered Boltzmann machine would suffice, to an arbitrary level of precision – even though it's possible that, with all the data in hand, the training algorithms still fail to find the absolute best fit, which might be the only fit that reflects the actual physics.

If Newton were to be a data scientist, then, he may have discovered the laws of motion and gravity, but without really realizing it. These "laws" would have been approximate and statistical in nature, and represented only through computational models. But they still would have been good enough to predict the seasons and the motion of planetary bodies.

So, the next time someone asks you to build a machine-learning model to predict a natural phenomenon without learning the physics of it, feel free to hesitate, but don't shy away from it entirely. Chances are that, within a certain range, the machine-learning model you create will perform well enough. And if you have access to some type of physics model for the phenomenon, you can use it to shape the right machine-learning model – sometimes as a Bayesian prior, sometimes to scale the variables, and still other times to determine the right functional form for the machine-learning model.
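A small sketch of the last of those ideas – letting physics pick the functional form. If theory hints that fall time grows like the square root of height, regress on sqrt(height) instead of raw height; on synthetic data the gain shows up directly in R-squared.

```python
import numpy as np

# Compare two feature choices for predicting fall time:
# raw height vs. the physics-suggested sqrt(height).
rng = np.random.default_rng(2)
g = 9.81
height = rng.uniform(2.0, 50.0, 200)
time = np.sqrt(2 * height / g) + rng.normal(0, 0.02, 200)

def r_squared(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

ones = np.ones_like(height)
r2_raw = r_squared(np.column_stack([ones, height]), time)
r2_phys = r_squared(np.column_stack([ones, np.sqrt(height)]), time)
print(f"R^2 with raw height:   {r2_raw:.4f}")
print(f"R^2 with sqrt(height): {r2_phys:.4f}")
```

Both fits look respectable, but the physics-informed feature leaves essentially nothing on the table – the residual is pure measurement noise.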

Can your product pass the Turing Test?

Before you start huffing and puffing, let me explain what I mean. I don't mean the Turing Test in the usual sense. Let's take the example of Google Maps. Imagine a Turing Test of a restricted variety in which you ask standard mapping questions of two agents, one being a human and the other being Google Maps, in a language just powerful enough to ask mapping questions. No free-flowing conversation in a human language is permitted. This restricted language could be akin to the mapping API Google provides. Can the receiver of the answers distinguish the human from Google Maps? For the sake of argument, assume that response time is not an issue here.

While you think about this, let me jump ahead and address the smart alecks who will inevitably get hot under the collar at the prospect of being matched by a machine and protest that the inability to identify the agents correctly still doesn't prove that Google Maps is intelligent. To make their case, they might point to the electronic calculator. They'd say that the results of the calculations are the same regardless of how they're performed. So the two agents will give identical answers and hence become indistinguishable to the receiver. But certainly a calculator is not intelligent. Hence, neither is a mapping service like Google's.

I retort thusly (as a computer from the last century might say). One, I wasn't really talking about intelligence, you questionably-intelligent aleck! I was simply asking if Google Maps will pass the restricted Turing Test. Who cares if Google Maps is intelligent? This common confusion seems to spring from mixing up intelligence with being human. And sure enough, the Turing Test is ultimately not really a test for intelligence, which no one has properly defined anyway. It's really a test to verify whether the responding agent behind the screen is human, a far more easily defined concept.

Two, the smart aleck's argument must be flawed in any case, because it appears to apply to any service that works as well as a human being would, not just a calculator. For example, replace Google Maps with Google Translator – not the current version, which will certainly not fool anyone, but a version in the future that really works very well. It could be version 100 or version 1000; it doesn't really matter. Let's say you interact with that future version in the same limited way – you type in a paragraph in one language and it prints back a perfect translation in the required language. Now even the smart aleck will wonder if the translating agent is not just human but intelligent. What changed? Well, it's a matter of gradation. We have always associated language and its infinite nuances with intelligence. This machine is not conversing, just translating, but it's no longer so easy to dismiss the case for the machine being intelligent.

Coming back to Google Maps, what say you? Will it pass the imagined Turing Test? The irony here is that not only will it be able to answer most location and direction questions you could ever expect a human to answer, it will also have to be dumbed down in some way to represent an average human. After all, who among us remembers the shortest path from Fairbanks to Buenos Aires – something I bet many of us at one point or another have tried on Google Maps just to see what comes up. This dumbing down doesn't seem too difficult, really. Just program Google Maps to throw up its hands if the directions involve more than a handful of steps and say, "How'd I know that, dude?"

All this suggests that restricted Turing Tests are a useful concept. Specifically, they can be very useful in gauging how close a product comes to what humans can provide using their putative intelligence. A calculator from even two decades ago would have passed a restricted Turing Test. Today's Google Maps will also likely pass the test, even though the versions from a decade ago certainly would not have. And the current Google Translator would certainly fail it.

So if you are a product manager out there, ask yourself this question: can my product pass the Turing Test? If your answer is yes, what can I say? May the Force be with you. If you think the answer is no, imagine a version that will pass the test and aim for that version. If you think the question is not even relevant, think about how you can make it relevant, since chances are that your product will become that much more interesting in the process.