One of the assumptions underpinning the life-cycle consumption hypothesis discussed in the previous post is that consumers have some beliefs about how long they (the consumers) will live. If I expect to expire at age 50, for example, I might choose to spend my money stockpiling quality underwear at an earlier age. But if I expect to die that young, I would also expect to have lower lifetime earnings, which would potentially affect both my consumption levels and the level of "excess savings" that I carry with the intention of bequeathing them to my little ones, if any (savings, not little ones). The important point, of course, is that a change in the expected terminal date would also cause a change in lifetime consumption patterns.
Another possibility that we should probably consider is that a robot uprising would pit machine against man and wipe out civilization as we know it. This would be different from simply expecting to die young, of course, because now not only will I not be around, my offspring won't be around, either. This little wrinkle completely wipes out any bequest motive I might have. The comparative static results for both of those, I imagine, point toward greater consumption today. I would further conjecture that the greater the probability of a future uprising, the larger its effect on your current expenditures.
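That comparative static can be sketched in a hypothetical two-period model of my own devising (the parameter values and functional forms are my illustrative assumptions, not anything from the post). With log utility, a consumer who survives to period two with probability 1 − p solves max log(c1) + β(1 − p)·log(c2), with c2 = (y − c1)(1 + r). The first-order condition gives the closed form c1* = y / (1 + β(1 − p)), so consumption today rises with the uprising probability p:

```python
# Two-period life-cycle sketch: maximize log(c1) + beta*(1-p)*log(c2)
# subject to c2 = (y - c1)*(1 + r). With log utility the interest rate
# drops out and the closed form is c1* = y / (1 + beta*(1 - p)).
# All parameter values below are illustrative assumptions.

def optimal_c1(y, beta, p):
    """First-period consumption when the uprising hits with probability p."""
    return y / (1 + beta * (1 - p))

y, beta = 100.0, 0.95  # lifetime resources and discount factor (assumed)
for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"p = {p:.2f} -> consume today: {optimal_c1(y, beta, p):.2f}")
```

At p = 1 (certain doom) the consumer eats everything today; at p = 0 the standard smoothing result reappears, with consumption today strictly increasing in p in between.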
On this point, you probably have a lot of questions, as do I. In particular: is the robot uprising anticipated by some but not others? Is it common knowledge, so we all see it coming? Or is it a completely unanticipated shock to everyone? These distinctions matter because if I anticipate a possible robot uprising before the credit markets do, I can borrow heavily before interest rates spike. This is clearly the appropriate strategy, and it would allow me to consume at levels well beyond what my lifetime earnings would support. In a macro model, of course, this would come out in the wash, because my increased consumption would be offset by the lenders' decreased consumption.
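The arithmetic behind that borrowing strategy is simple enough to sketch (again, the numbers and the risk-neutral framing are my own illustrative assumptions). If lenders price loans as if p = 0 while I know the true uprising probability, my expected repayment on a loan of face value B at rate r is only (1 − p)·B·(1 + r), since in the uprising state nobody is around to collect:

```python
# Expected repayment on a loan when the borrower privately knows the
# uprising probability p but the lender prices the loan as riskless.
# Illustrative numbers; a risk-neutral sketch, not a full credit model.

def expected_repayment(B, r, p):
    """Repayment occurs only in the no-uprising state (probability 1 - p)."""
    return (1 - p) * B * (1 + r)

B, r, p = 100.0, 0.03, 0.5  # assumed loan size, pre-spike rate, uprising odds
print(f"face value due:      {B * (1 + r):.2f}")
print(f"expected repayment:  {expected_repayment(B, r, p):.2f}")
```

The gap between face value and expected repayment is exactly the sense in which private foresight lets me consume beyond my lifetime earnings; once the market shares my beliefs, r spikes until the gap closes.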
At any rate, if you think I'm just some lone professor thinking about these issues, you would be wrong. As just one example, Cambridge philosopher Huw Price is also very publicly concerned. But rather than sitting idly by and awaiting the robot apocalypse, Professor Price is helping to get a jump on it by co-founding the Centre for the Study of Existential Risk. And here's why:
“It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology.”
Escape from the constraints of biology? I’m not sure what that means, but it sure doesn’t sound so good. What’s more:
[A]s robots and computers become smarter than humans, we could find ourselves at the mercy of “machines that are not malicious, but machines whose interests don’t include us”.
Those snippets are taken from this BBC piece.
Robots taking your job may well turn out to be the least of your worries.