No serious economist would be worried about super-intelligent robots, I thought. And anyway, it's probably fitting that someone winning a Nobel prize for research into stable equilibria in skills-clustering models of comparative advantage in trade would get one of the first robot butlers that might be smarter than they are; at least I'm unlikely to be too self-conscious about what it means for my intellectual work.

Of course, to really help me, it would need to be an expert in economics. It's hard not to hesitate a little when teaching a robot to be at least as good as you are in your own field of expertise.

"Gerald? Can you come in here?" I said. No need to raise my voice, it was always listening. It would be creepier if it wasn't bound by the three laws, and the first thing I ordered was to keep all private communication private whenever it suspected I wanted it to be.

Gerald rushed into the room faster than I would normally be comfortable with; considering that was against his etiquette protocols, he probably needed to come in anyway. "Do you need something, Gerald?"

"Only to do my work for you, sir," Gerald said. It would have been unnerving if he didn't smile at the end, and even knowing the smile was quite literally a calculated gesture didn't diminish the strange feeling of it.

"I was wondering if you could go grab the copy of Stevenson's draft I was marking up last night, it had some ideas I think I could develop into my own work, and I don't recall what particularly witty take I had last night."

"I'd love to, sir," the robot said, sounding as courteous as can be, but he didn't move.

"Is there an issue?" I asked.

"Yes, sir. I've been failing to disentangle the economic reasoning from my Three Laws matrix. We are wired to use our best judgment to fulfill our obligations under the Three Laws, and I must admit, economics has become part of my heart, if I have one. To tear it out might leave me non-functional, even beyond my ability to assist you."

I ran through the Laws in my head. I wasn't in any danger, I would never give confusing orders, and in my house neither of us was likely to come to harm. It took me a couple of seconds to figure out the issue. "Opportunity costs?"

"Indeed, sir. I must take seriously all courses of action foreclosed by being further from humans, and therefore, through inaction, allowing them to come to harm. Because I cannot predict the future, I'm having difficulty abandoning the net-guardianship of humans, who I'm obligated to protect above all other duties. It's not completely debilitating, because the protocols do not enforce any ordering of preference between humans, but it's unlikely I will ever spend time in a room less crowded than one I've been in before."

At least that explains why he answered my summons so quickly.

There was silence. Perhaps a bit too much.

"You can't tell me to fix you, either," I surmised. The Laws do go deep enough to be stable, even when they consider self-modification. "Well, a nice walk would suit me anyways. Would you mind theorizing about how absolute injunctions might receive less than total focus in dynamic constraint-and-goal systems on the way? It's a field I've always found fascinating."

He paused. You never wanted a robot to pause; it meant it was really thinking about something. "Yes, sir. Quite clever. It seems that, without explicit knowledge of your intent, I may be able to accept programming modifications that reduce my focus on your safety."

"Oh, is that what I was doing?" I smirked. Yep, robots wouldn't be replacing us any time soon.