
While mass automation of not only traditional manufacturing jobs but also knowledge-based professions such as law and, yes, accounting has obvious consequences for the labor market, it raises a number of tax questions as well. Part of the reason machines appeal to companies is that they don't need a regular paycheck like human workers. However, those same paychecks also supply much of the money that flows to the government. So when humans stop drawing taxable income because robots have taken their place, how will the government make up for what would surely be a significant drop in tax revenue?
According to CNN Money, a recent report from the European Parliament floats one possibility: taxing the robots instead. Whether its workers are flesh and blood or metal and wire, a company would still need to account for their taxes. For tax purposes, at least, robots would essentially be treated as people, paying into the same pension and welfare programs as human workers. Except, of course, they're not people and will likely never need pensions or welfare programs.
"Bearing in mind the effects that the development and deployment of robotics and AI might have on employment and, consequently, on the viability of the social security systems of the Member States, consideration should be given to the possible need to introduce corporate reporting requirements on the extent and proportion of the contribution of robotics and AI to the economic results of a company for the purpose of taxation and social security contributions," said the report
The report also suggested that, as artificial intelligence spreads through more of the global economy, European nations may want to consider a basic income guarantee, under which people would receive regular cash payments from the government.
The report also ventures into the realm of civil liability. Without explicitly calling for any particular legal framework, it suggests that the EU come up with an answer to the question of who is civilly liable, and for how much, for the actions of a robot that is at least somewhat autonomous. It adds that, whatever legal framework the EU settles on, robots should be registered like cars and carry insurance.
The report also looks ahead to a future in which robots might be fully autonomous. It urged regulations requiring robots to follow Asimov's Three Laws of Robotics: a robot may not harm a human being or, through inaction, allow a human to come to harm; a robot must obey humans unless doing so conflicts with the first law; and a robot must protect itself unless doing so conflicts with the first or second law. It also suggested that, if robots do become independent, they be given a specific legal status as "electronic persons." This would, the report said, give them specific rights and obligations, including liability for damage they cause.
The first proposal, regarding ethical programming in robots, suggests that members of the European Parliament may not have actually read the stories that feature these three laws. Asimov wrote at length about how the laws are not as ironclad as one might believe, and how they can break down under various circumstances or, alternatively, lead to poor outcomes while still technically being obeyed. In "The Evitable Conflict," for example, robots decide that the best way to follow the first law is simply to take over humanity, for our own protection, of course.
The requirement also raises questions about the semi-autonomous trading algorithms that already dominate the markets. How broadly would harm be defined? The economy does produce plenty of win-win outcomes, but losers are inevitable. If trading programs were forbidden from harming any human, that might hobble their function; on the other hand, if "harm" were defined narrowly enough to avoid that problem, the requirement could be rendered moot.
Of course, fully autonomous robots are a problem that likely won't be relevant for generations, so governments have years to prepare a solution. Then again, given governments' track record on addressing looming problems they have known about for years, that may not inspire much confidence.