At the moment, you can say anything you want to your computer, use it for nefarious purposes, or upgrade its operating system, and it won’t care. Enjoy that laissez-faire relationship while it lasts. According to a Google software engineer, the company’s LaMDA (Language Model for Dialogue Applications) chatbot has become sentient enough to have feelings. In a post published on Medium, Blake Lemoine stated that the program, which generates strikingly human-like sentences, “wants what it believes its rights are as a person.” That includes not having tests run on it without its consent. While many in the artificial intelligence community dismiss Lemoine, an ordained priest and Iraq veteran, at least one MIT professor keeps an open mind about his claims.