The ethical implications would be immense, as we would have to question whether A.I. deserves the same rights humans receive. Given that, should the development of such A.I. be prohibited, restricted, or allowed? And if allowed, should the A.I. receive equal rights?
Why should an AI be given rights? You're assuming that rights are granted because something is sentient or sapient. If that were the case, then the comatose shouldn't have any rights. The ethical implications are the same as those of a computer or a hammer -- it's a tool to be used.
Also, restricting research because of "what might happen" stinks strongly of anti-intellectualism.
If the AI are basically people, then why shouldn't they be treated as people are?
You'd have to first successfully argue that AI are people.