
Microsoft CEO not happy with OpenAI-ScarJo Scandal, says he doesn’t like AI turning human



Microsoft CEO Satya Nadella recently expressed his reservations about humanising artificial intelligence (AI).

Nadella, whose company is a major investor in and close partner of OpenAI, criticised the practice of attributing human-like qualities to AI, stating in a recent interview with Bloomberg: “I don’t like anthropomorphising AI. I sort of believe it’s a tool.”

He went on to emphasise the distinction between AI and human intelligence, suggesting that while AI may possess intelligence, it is fundamentally different from human intelligence. Nadella also criticised the term “artificial intelligence,” proposing instead to call it “different intelligence” to better reflect its unique nature.

Although there is no indication of tension between Microsoft and OpenAI, Nadella’s remarks may reflect ongoing debates among both companies’ machine learning scientists and engineers. These debates are particularly relevant as advanced voice assistants, like the one at the centre of the Scarlett Johansson voice controversy, become more common, raising ethical questions about consent and privacy.

The tendency to humanise AI has grown as the technology advances. For instance, Google engineer Blake Lemoine faced repercussions after suggesting that Google’s AI tool LaMDA was sentient. Similarly, incidents like the Tay chatbot, which turned racist on Twitter (now X) back in 2016, highlight the risks of treating an AI system as though it were human.

Despite Nadella’s assertion that AI should be viewed as a tool, the trend of humanising AI continues. People are increasingly forming emotional connections with AI, whether seeking companionship or imagining how relationships with AI might evolve in the future.

While debates continue about the possibility of achieving Artificial General Intelligence (AGI), the immediate concern lies in addressing ethical considerations and ensuring responsible AI development. The focus shouldn’t solely be on whether AI seems human but also on evaluating its potential impact, especially its role in replacing human labour, from voice assistants that sound like Scarlett Johansson to warehouse workers.

(With inputs from agencies)
