The way I see it, so long as we create robots that cannot perform tasks other than the ones specifically given to them, we should be safe. As for artificial intelligence, so long as we don't allow it to interact with other machines, like cars, trains, construction vehicles, weapons, and other computers, we would be safe. Also, the laws of robotics could be hard-wired into their motherboards:
1. A robot cannot cause/allow injury to a human.
2. A robot must follow the orders of a human, so long as that doesn't conflict with the first law.
3. A robot cannot allow the destruction of itself, so long as that doesn't conflict with the first two laws.
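For what it's worth, the priority ordering of those three laws is easy to picture as a simple check. Here's a minimal sketch in Python; the Action fields and the permitted function are made-up names purely for illustration, not anything that actually exists:

```python
from dataclasses import dataclass

# Hypothetical description of a proposed action, just for this sketch.
@dataclass
class Action:
    harms_human: bool       # would carrying this out injure a human?
    ordered_by_human: bool  # was this action requested by a human?
    harms_self: bool        # would this action destroy the robot?

def permitted(action: Action) -> bool:
    # First law: never cause or allow injury to a human.
    if action.harms_human:
        return False
    # Second law: follow human orders, unless they break the first law.
    if action.ordered_by_human:
        return True
    # Third law: protect itself, so long as the first two laws allow it.
    return not action.harms_self
```

The point of the ordering is that each rule only applies when the ones above it don't override it, which is why the checks run top to bottom.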